Where Was Climate Research Before Computer Models?

Guest essay by Dr. Tim Ball

I think Essex and McKitrick’s chapter, “Climate Theory Versus Models and Metaphors,” in their book Taken By Storm is a very good analysis of the challenges facing climatology. They ask:

“Do we have any clues at all on how to start the climb toward the summit of Mount Climate Theory? For a while in the 20th Century, it was looking good. Computers were appearing on the scene, and data were more systematically collected. Many scientists believed that putting in ever more copious detail might pull off the climb. Sure, there would always be something missing, but with the aid of more data and the growing computational power, perhaps it wouldn’t matter. It didn’t before. What ultimately did happen in science surprised everyone, and it all had to do with turbulence.”

They raise the internal issue of turbulence, which is legitimate, if you assume the models are valid. I reject that assumption, as I explained in my recent article. I agree with their point that the models changed things, but suggest they are a regression rather than an advance. They created an illusion of possible resolution, with the claim that the only limit was computer size and power.

This raises the question of where we were on Mount Climate Theory before computer models appeared. Where was that relative to the normal progress of the scientific method? A comparison to the development of Darwin’s Theory of Evolution is helpful. The work of Carl Linnaeus (1707 – 1778) was critical to Darwin. The Linnaean system organized a multitude of data into patterns that allowed easier analysis and potential understanding of mechanisms. The Darwinian view is based on the Linnaean system, which provides an over-arching or generalist view.


Early Generalists

Three scientists from the early 20th century shaped our view and understanding of the world and climate. They knew each other well and worked together on global patterns. One was Milutin Milankovitch, the Serbian mathematician and climatologist, whose work quantified the combined effect of changing Sun/Earth relationships on climate. Alfred Wegener contributed the continental drift theory that provides a fundamental foundation for geology. This has implications for climate through changing land/ocean ratios and latitudes, but also through changes in volcanic activity. He married the daughter of Vladimir Koppen, whose training combined meteorology, climatology and botany. Koppen’s system used plants as an indicator of climate to produce a global climate classification that is the basis of most systems since. Milankovitch said Koppen’s extensive understanding of global climate patterns helped him identify that temperatures at 65°N latitude were a critical measure.

All three saw their ideas challenged in the appropriate scientific way, but the ideas withstood attempts to disprove them. Despite this, the public is generally unaware of their work and its implications. They challenged prevailing views, which always creates a struggle. They also challenged the underlying view of uniformitarianism, the western scientific idea that change is gradual over long periods of time. A common denominator for all their ideas was the lack of a mechanism driving the discernible patterns and evidence. This parallels Darwin’s lack of knowledge about genes and DNA. Koppen’s classification was a model that captured more of reality, and yielded more understanding, than the computer models that purport to replace it. Koppen didn’t know about the problems with turbulence.

The Linnaean classification system that named, ranked and classified organisms, was a major advance in biology. Vladimir Koppen produced a climate classification system in 1884 that named, ranked, and classified climates and was a major advance in climatology. In my opinion, Koppen is where we are on Mount Climate Theory, with little or no advance because of the political abuse of climate science, the Intergovernmental Panel on Climate Change (IPCC), and their self-serving creations, the climate models.

There are nine major climate zones in the simplest form of the Koppen classification (Figure 1). Zones 1, 3, 7 and 9 are singular, with similar weather conditions all year influenced by one major control mechanism. The others, 2, 4, 5, 6 and 8, have mixed weather conditions, because they come under different control mechanisms as the seasons change.


Figure 1: Which region are you in?

Koppen created a system around these nine zones, based on average annual precipitation and average monthly temperature and precipitation. He identified six major divisions:

A. Tropical Humid

B. Dry

C. Mild mid-latitude

D. Severe Mid-latitude

E. Polar

H. Highland (added later).

He subdivided these into second and third divisions, based on unique temperature or precipitation conditions.

Unlike the IPCC, which focuses almost exclusively on temperature, Koppen recognized that water, in all its phases, was generally paramount. His B classification is the only one initially determined by annual precipitation, and in applying the classification system you first determine whether a climate is B. It is not a B climate if there is sufficient precipitation to support trees. If the precipitation cannot support trees, Koppen uses a sub-classification letter to separate regions that support grasses, BS (for steppe grasslands), from those with no vegetation at all, BW (desert). A desert is hot or cold, defined by the lack of vegetation rather than temperature, so a third letter separates h (hot) from k (cold). The North Pole, BWk, is a cold desert.

Koppen recognized another important issue, called the effectiveness of precipitation. A portion of rainfall is evaporated; what remains goes into the ground and is available for the plants. Koppen defined what was effective, that is, available for the plants, by identifying three different annual patterns: rainfall year round; 70% in the summer; or 70% in the winter. Each may have the same annual total, but the amount left for the plants varies considerably.
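
To make the decision procedure concrete, here is a minimal sketch in Python of the B-climate test as restated in the later Koppen-Geiger literature. The 20 × T dryness thresholds, the 70% seasonality splits, and the 18°C hot/cold boundary are the modern constants usually quoted; treat them as assumptions here, not necessarily Koppen’s 1884 values.

```python
def koppen_b_class(t_ann_c, p_ann_mm, summer_frac):
    """Classify a station as BW/BS (or not B) using one common
    restatement of Koppen's dryness rules.

    t_ann_c     : mean annual temperature, deg C
    p_ann_mm    : total annual precipitation, mm
    summer_frac : fraction of annual precipitation falling in the
                  summer half-year (Koppen's 70% seasonality rule)
    """
    # Dryness threshold: the precipitation below which trees cannot be
    # supported. It rises with temperature and with summer-season rain.
    if summer_frac >= 0.70:        # rain concentrated in summer
        threshold = 20 * t_ann_c + 280
    elif summer_frac <= 0.30:      # rain concentrated in winter
        threshold = 20 * t_ann_c
    else:                          # rain spread through the year
        threshold = 20 * t_ann_c + 140

    if p_ann_mm >= threshold:
        return None                # not a B climate; trees are possible
    # Second letter: desert (BW) vs steppe grassland (BS)
    letter2 = "W" if p_ann_mm < threshold / 2 else "S"
    # Third letter: hot (h) vs cold (k), split at 18 C mean annual temp
    letter3 = "h" if t_ann_c >= 18 else "k"
    return "B" + letter2 + letter3

print(koppen_b_class(24.0, 100.0, 0.5))   # BWh, a hot desert
print(koppen_b_class(2.0, 60.0, 0.5))     # BWk, a cold desert
```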

Koppen modified his system, with revisions in 1918 and 1936, the latter some 52 years after his first publication. He was not done. He died in 1940 at the age of 93, but not before he had produced a more sophisticated system with Rudolf Geiger, another very important early climatologist. Geiger’s valuable book, Climate Near the Ground, published in 1950, was an important contribution to climate science. The Koppen-Geiger system is still in use (Figure 2).

The IPCC effectively excludes Geiger’s findings by using data from the Stevenson Screen, between 1.25 and 2 m above ground, which is above the critical biospheric layer in which all interchange between the surface and the atmosphere occurs.


Figure 2: The generally linear pattern of climate is clear.

Practical Climatology

A major part of climate, and therefore of any climate model, is the movement of water through the Water Cycle. In 1931, Charles Thornthwaite (1899-1963) produced a classification similarly based on precipitation effectiveness and vegetation. It uses total monthly precipitation (P) and evaporation (E) to produce a P/E Index. In 1948, he modified it to include a moisture index that relates the amount of moisture a plant needs, the Potential Evapotranspiration (PE), to the available supply, to produce the Actual Evapotranspiration (AE).
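
Since the P/E and PE indices are only described in words here, a minimal sketch of the monthly potential-evapotranspiration formula may help. The constants are those usually quoted for Thornthwaite (1948); the day-length/latitude correction is omitted for brevity, so this is the unadjusted form.

```python
def thornthwaite_pet(monthly_t_c):
    """Unadjusted Thornthwaite (1948) potential evapotranspiration.

    monthly_t_c : list of 12 mean monthly temperatures, deg C
    Returns 12 monthly PET values in mm (30-day months, no
    day-length/latitude correction applied).
    """
    # Annual heat index I, summed over months with T > 0
    I = sum((t / 5.0) ** 1.514 for t in monthly_t_c if t > 0)
    a = (6.75e-7 * I**3) - (7.71e-5 * I**2) + (1.792e-2 * I) + 0.49239
    return [16.0 * (10.0 * max(t, 0.0) / I) ** a for t in monthly_t_c]

# A water balance in the Thornthwaite-Mather spirit then compares
# monthly precipitation P with PET: a surplus when P > PET, a drawdown
# of soil moisture (toward actual evapotranspiration AE) when P < PET.
temps = [2, 3, 6, 10, 15, 19, 22, 21, 17, 11, 6, 3]   # illustrative only
print([round(v, 1) for v in thornthwaite_pet(temps)])
```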

The 2007 IPCC report says little, but acknowledges the lack of data and understanding:

“There are very limited direct measurements of actual evapotranspiration over global land areas. Over oceans, estimates of evaporation depend on bulk flux estimates that contain large errors.”

The problem is, this is the major mechanism of transferal of heat energy in the global system.

In 1946 Thornthwaite opened the Laboratory of Climatology in New Jersey, and in 1955 John Mather joined him. They produced a revised system that is the basis of most practical applications of climate work today. Experts, from irrigators to hydrologists, use variations of the model. For example, a paper studied the viability of predicting stream flow in Costa Rica. The authors concluded:

“These results indicate that the Thornthwaite method can be satisfactorily applied to estimate mean monthly streamflow in the uplands of Costa Rica.”


Another study used the model for Stormwater Management Planning in Ontario. The contributions of Thornthwaite to practical applications of climate were summarized in a 1996 biography, The Genius of C. Warren Thornthwaite, Climatologist-Geographer.

The IPCC makes only one comment about Thornthwaite’s work. In his book, Climate Change: A Natural Hazard, William Kininmonth, former head of Australia’s National Climate Centre says,

“The simple one-dimensional energy balance model used by the IPCC to justify its radiative forcing hypothesis is unrealistic in its portrayal of processes at the earth-atmosphere interface. The IPCC model suggests that the heat and latent energy exchange between the underlying surface and the atmosphere is a direct response to the imbalance of solar energy and terrestrial radiation at the surface. Such a proposal is at odds with the physics of the surface energy exchange processes.”

It’s one of many errors made to achieve a result; actions that are the opposite of even poor science.

Three scientists from the beginning of the 20th century had a profound impact on our view and understanding of the world and climate, yet are little known. Koppen, Wegener, and Milankovitch did more to help us understand the world and its dynamic systems than most. Some blame an education system used to indoctrinate rather than teach. It’s the only explanation for the continued teaching of a fixed pattern of Sun/Earth relationships when science knew over 100 years ago how much they change. However, the biggest hindrance in the 20th century was the IPCC and the governments that accepted its findings. This was reinforced by funding only research that supported their views. They settled the science.

I’ve been ridiculed for having a degree in climatology issued through a department of geography. Part of this personal attack is by self-proclaimed climate scientists, as discussed elsewhere, who usually can’t see the forest for the trees. Climatology was and remains a natural study area for geography. Alfred Hettner defined geography as chorology, about which he wrote,

“The goal of the chorological point of view is to know the character of regions and places through comprehension of the existence together and interrelations among different realms of reality and their varied manifestations, and to comprehend the earth surface as a whole in its actual arrangement in continents, larger and smaller regions, and places.”

It’s a summary of the challenges for climate science. I think it’s a challenge for all of science, which has dissected the world into individual pieces but lacks the perspective and training to put it back together. Koppen, Wegener and Milankovitch knew this. They would have known that applying the temperature of a single station to the surrounding 1200 km radius area is wrong. Thanks to the IPCC, we have not advanced from their point on Mount Climate Theory.

Comments
JimS
October 22, 2014 12:11 pm

A very well made point, Mr. Ball. Thanks.

tadchem
October 22, 2014 12:16 pm

There is an old adage: “When the only tool you have is a hammer, everything begins to look like a nail.”
The corollary for the Information Age is: “When you put everything into a digital model you may forget that Reality is analog.”

jono1066
Reply to  tadchem
October 23, 2014 12:57 am

Although I still agree with your sentiment: in the day when information was being used to greater and greater degrees, the computers were analogue, not digital.
I guess you mean in the current ‘information overload’ age.

Cal65
Reply to  tadchem
October 23, 2014 12:59 am

Digital is just an approximation.

Danny Thomas
October 22, 2014 12:17 pm

Thank you Dr. Ball for an article that a non-scientist such as myself is able to read and comprehend. It takes me way back to an Environmental Science class I took many (many) years ago.
One question I’ve wondered about, after reading elsewhere about the 97% of climate scientists who (insert this or that), is how one is proclaimed a climate scientist as used in this so-called statistic. Can anyone enlighten me? Thanks in advance.

Reply to  Danny Thomas
October 22, 2014 1:14 pm

In the Farnsworth and Lichter (2011) study they defined a climatologist as someone who had published over 20 papers, with more than half related to climate studies. Here is a link to their paper, but it is paywalled:
http://ijpor.oxfordjournals.org/content/early/2011/10/27/ijpor.edr033.short
Although this seems a reasonable definition, most people publishing numerous climate studies these days are modelers, not climatologists in the traditional sense such as Dr. Ball. They know more about grid spacing, parameterization, and Mbps than actual weather measurements or their relationship to the land.

Reply to  Danny Thomas
October 22, 2014 3:31 pm

A survey was sent out by two university students to 10,237 recipients; approx. 3,240 replied; all but 77 were discarded, of which 75 said there was global warming, equaling 97.4%.
QED
And that is how you bend statistics.

October 22, 2014 12:30 pm

“There are very limited direct measurements of actual evapotranspiration over global land areas. Over oceans, estimates of evaporation depend on bulk flux estimates that contain large errors.”
The problem is, this is the major mechanism of transferal of heat energy in the global system.

No, that would be radiational heat transport!

latecommer2014
Reply to  Phil.
October 22, 2014 3:37 pm

I disagree; evapotranspiration is how the climate makes adjustments to positive and negative forcing, of which radiational heat transport is part of the mechanism.

tty
Reply to  latecommer2014
October 23, 2014 12:39 am

latecommer2014
Even the IPCC admits that convection is the main heat transfer process, though this is not obvious from their figures. They manage to obscure this by showing gross radiational heat transport (the famous “back radiation”) but only net convective transport in all figures. This would only be correct if descending air and rain were at 0 kelvin.

Konrad.
Reply to  Phil.
October 22, 2014 7:38 pm

“this is the major mechanism of transferal of heat energy in the global system”
This statement is correct. Radiation is how energy enters and leaves the global system, but within it, evapotranspiration combined with convection is the primary energy transport. Radiative “ping pong” within the Hohlraum of the lower atmosphere is an irrelevance.

mikewaite
October 22, 2014 12:42 pm

I have looked for textbooks on climate modelling methods on the usual used-book marketplaces and found the following:
– Climate Modelling Primer, by McGuffie
– Climate System Modelling, by (ahem) Trenberth
If one could only justify buying one, which is to be preferred if the aim is to understand more fully the arguments put forward on this and the other climate-related sites?

mpainter
Reply to  mikewaite
October 22, 2014 1:10 pm

A waste of money. Climate processes cannot be faithfully modeled, mainly because of our ignorance of the subject. One cannot model what one does not understand.
GCMs are merely projections of climate sensitivities, with squiggles. These CS figures are all based on doubtful assumptions.
Save your money, save your time.

Reply to  mikewaite
October 22, 2014 1:33 pm

Perhaps neither. Might I suggest simply downloading the free documentation for NCAR CAM3, one of the core GCM progenitors. The title is NCAR/TN-464+STR (2004). Anyone can follow the logic, if not all the partial differential physical process equations. Particularly interesting is the needed parameterization in sections 4.5-4.9 and 7. That is where the main fail arises, as predicted by Asafoku in 2007. It is necessary because the grid scales are necessarily an order of magnitude larger than the convective processes of Lindzen’s ‘adaptive iris’. See the essay Models All the Way Down in the ebook Blowing Smoke. Also, the climate chapter of The Arts of Truth ebook has about 20 pages of layman-level explanation based on this NCAR documentation, including how AR4 engaged in gross selection bias to claim GCM outputs match well to observations, when they don’t. The unmodeled pause is just a consequence of that more detailed set of facts.

jorgekafkazar
Reply to  Rud Istvan
October 22, 2014 3:20 pm

Thanks, Rud Istvan. I wish someone would do a detailed public dissection of that or a similar model. Or, better, a post-mortem.

JohnD
Reply to  mikewaite
October 22, 2014 7:17 pm

I have not checked it out, but MIT offered a MOOC on climate models. It is currently not being offered, but the course materials are online to audit:
https://www.edx.org/course/mitx/mitx-12-340x-global-warming-science-1244#.VEhfNRb6cms

mikewaite
Reply to  JohnD
October 23, 2014 12:57 am

Thank you everyone for the suggestions, I will check them out.

Toto
October 22, 2014 12:43 pm

Excellent. “Three scientists from the beginning of the 20th century had a profound impact on our view and understanding of the world and climate, yet are little known.” Litmus test for wannabe scientists: Do you want fame, glory, and riches or do you want to make a profound impact on our understanding?

jorgekafkazar
Reply to  Toto
October 22, 2014 3:22 pm

I think that litmus test needs a second indicator: Do you think you’re a Messiah?

bil
Reply to  Toto
October 23, 2014 7:44 am

I was taken by the “little known” quote. I remember studying all of them in O-level (high school) Geography lessons in the early ’80s. I would change “little known” to “unfashionable”.

October 22, 2014 12:50 pm

To confuse Earth and its atmosphere with a one-dimensional model thus arriving at the radiative forcing hypothesis is to apply the fallacy that is known in logic as “misplaced concreteness.”

Editor
October 22, 2014 12:51 pm

Which region am I in?
Well, sometimes it rains a lot, sometimes it is dry.
Sometimes it is hot, sometimes it is cold, or mild or windy,
Sometimes it snows.
Sometimes we have warm winters and cold summers. Sometimes we have snowy springs and autumn heatwaves.
But, always, we lose on penalty shootouts.
Need any more clues?

michael hart
Reply to  Paul Homewood
October 22, 2014 4:29 pm

What region would you like to be in? There is a model that will satisfy your desire. 🙂

ferdberple
Reply to  Paul Homewood
October 22, 2014 7:07 pm

vancouver

Reply to  Paul Homewood
October 23, 2014 1:07 am

holland

Gary
October 22, 2014 1:05 pm

Another area of fruitful research before computer models not mentioned in this article is the paleo-oceanographic work that proved the theories of both Milankovitch and Wegener correct. Sediment cores collected during the period from the 1950s to the 1970s supplied the raw material for the landmark paper by Hays et al. showing the cyclic pattern of ice ages: http://www.sciencemag.org/content/194/4270/1121. Paleomagnetic measurements across the mid-ocean ridges conclusively demonstrated seafloor spreading and continental drift. The rise of oceanography as a discipline in the 1960s and extensive data collection at sea (not the least of which is the Deep Sea Drilling Project) have provided spatial and chronological coverage of the 70% of the earth’s surface not mapped by the Koppen-Geiger climate zones.

milodonharlani
Reply to  Gary
October 22, 2014 7:17 pm

Also, there are extinct biomes which reemerge during glacials, such as “temperate” zone tundra. And bear in mind that glacials occupy an order of magnitude more time during the Pleistocene than interglacials.

October 22, 2014 1:05 pm

“They would have known that applying temperature of a single station to the surrounding 1200 km radius area is wrong. ”
It helps if you actually understand what the method does.
It doesn’t APPLY the temperature of a single station to the surrounding 1200 km.
The method says: if you have no measurement AND you want to estimate the missing data,
which do you choose?
A) the closest data
B) the average of ALL the data
where “the closest data” is defined as any data within 1200 km.
In other words, when constructing a spatial average with missing data you cannot avoid infilling.
Cannot avoid it!! If you choose to leave missing data as “missing” you have, in effect, given it the value of the ‘average of the whole’.
When faced with missing data, Hansen for example uses the following rule:
A) If there is data within 1200 km, USE THAT to estimate the missing data.
B) If there is no data within 1200 km, then the missing data is set equal to the global average.
Why 1200 km? Well, look at correlation length. As long as you have data within 1200 km,
you will have a LOWER ERROR by infilling with that data rather than “leaving it blank”.
Remember, leaving data “blank” IS THE SAME as saying its value is equal to the average of all the data.
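
A minimal sketch, in Python, of the rule as just described, with invented stations and values; the 1200 km cutoff and the fall-back to the global average follow the comment above, not any particular GISS code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infill(target, stations):
    """Estimate a missing value at `target` = (lat, lon).

    stations: list of (lat, lon, value) observations.
    Rule described above: if any station lies within 1200 km,
    average those; otherwise fall back to the mean of all stations.
    """
    near = [v for (la, lo, v) in stations
            if haversine_km(target[0], target[1], la, lo) <= 1200.0]
    pool = near if near else [v for (_, _, v) in stations]
    return sum(pool) / len(pool)

# Invented example: estimate a gap at 45N, 110W from three stations.
obs = [(44.0, -112.0, 8.2), (47.0, -105.0, 6.9), (20.0, -60.0, 27.5)]
print(round(infill((45.0, -110.0), obs), 2))  # uses only the two nearby stations
```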

ARW
Reply to  Steven Mosher
October 22, 2014 1:41 pm

Just out of interest: by using the 1200 km rule, what percentage of points are “estimates” as opposed to “measured” when considering a) terrestrial temperature databases and b) oceanic surface temperature databases? An additional question would be: are there latitude and elevation adjustments applied to the estimated point temperature considering the location of the nearest station?

Reply to  Steven Mosher
October 22, 2014 1:59 pm

The method says.. If you have no measurement AND you want to estimate the missing data
which do you choose?
A) the closest data
B) the average of All the data.

C) admit you don’t know so don’t guess.
Much of the past (and even present) data collected is data for local conditions, never meant for any “global” application. The “missing data” isn’t missing. It never existed. CGI is great for movies. But “CGData” is not reality.

Alx
Reply to  Gunga Din
October 22, 2014 6:22 pm

That is a compelling observation: the data was never meant for global application.
The temptation was computers; computers allow the handling and manipulation of large amounts of data. The sin was in giving in to temptation by ignoring option C.

Konrad.
Reply to  Gunga Din
October 22, 2014 7:44 pm

Due to obvious unresolvable macro and micro site issues, even the available surface station data was clearly unfit for purpose. “C” is the correct answer, but they didn’t want that answer. That “C” was ignored speaks to motive.

Gavin Hetherington
Reply to  Steven Mosher
October 22, 2014 2:59 pm

I’ve lost count of the number of times I’ve heard variations of “We know it’s not perfect but it’s the absolute best we’ve got” since I’ve been taking an interest in the climate debate. You can add your own analogy.

Brian H
Reply to  Gavin Hetherington
October 24, 2014 11:20 am

Since the Great Dying of the Thermometers in 1990 (4400 of 6000 global weather stations disregarded) the 1200 km rule has got much more exercise. Those stations are mostly still there, and working, and still disregarded.

jorgekafkazar
Reply to  Steven Mosher
October 22, 2014 3:29 pm

so there are varying degrees of ignorance, and the new, improved ignorance is better than the ordinary ignorance. thanks so much for clarifying.

joeldshore
Reply to  Steven Mosher
October 22, 2014 5:03 pm

Steven,
You make a good point, but another point about how Tim Ball is misrepresenting what they do is that they do this for temperature ANOMALIES, not temperatures themselves, and this is because anomalies are correlated over a much larger distance than temperatures. This is such a basic point that one wonders why Tim Ball continues to make a claim that someone with his training ought to know is wrong.
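
For readers new to the distinction, a minimal sketch of converting raw temperatures to anomalies (all values invented):

```python
import numpy as np

# Invented data: 3 years of monthly means (deg C) for one station,
# shaped (years, 12).
temps = np.array([
    [1.0, 2.5, 6.0, 10.2, 15.1, 19.0, 22.3, 21.8, 17.0, 11.2, 5.9, 2.1],
    [0.4, 3.1, 6.8, 10.9, 15.6, 19.4, 22.9, 22.1, 17.5, 11.8, 6.3, 2.6],
    [1.6, 2.9, 7.1, 11.3, 16.0, 19.9, 23.1, 22.5, 17.9, 12.1, 6.6, 3.0],
])

# Climatology: the long-term mean for each calendar month.
climatology = temps.mean(axis=0)

# Anomalies: departures from that month's normal. The point raised
# above is that two stations at very different absolute temperatures
# (say, a valley and a mountain) can still share similar anomalies,
# which is why anomalies correlate over longer distances than raw
# temperatures do.
anomalies = temps - climatology
print(np.round(anomalies, 2))
```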

milodonharlani
Reply to  joeldshore
October 22, 2014 6:46 pm

It is 979 km from Death Valley National Park, CA to Mt. Elbert, CO. In such a common case, it would be much better simply not to include unsampled areas than to pick the nearest measured spot or take the global average.

Konrad.
Reply to  joeldshore
October 22, 2014 7:52 pm

Joel,
using anomalies as opposed to raw temperature still cannot correct for gradual microsite degeneration or UHI contamination. The surface station data is good enough for short term weather purposes. But for climate it was unfit for purpose. The “but, but, it’s all we’ve got” excuse is no good, this approach should never have been used.

beng
Reply to  joeldshore
October 23, 2014 10:53 am

joeldshore, that thought occurred to me too — that some posters don’t understand this, but I didn’t think for a microsecond that Dr Ball misunderstands it.

NZ Willy
Reply to  Steven Mosher
October 22, 2014 6:30 pm

“cannot avoid it”, my arse. You exclude the missing areas and increase the error bars. Can avoid, no problem, unless you need the whole-world data as a platform to a bunch of other stuff, then, yeah, GIGO.

Alx
Reply to  Steven Mosher
October 22, 2014 7:08 pm

“When faced with missing data Hansen for example uses the following rule.”
A) If there is data within 1200km, USE THAT to estimate the missing data.
B) if there is no data within 1200km, then the missing data is equal to the global average.

Missing data is missing data; making data up according to a rule is speculation. Speculation in building a hypothesis is fine; speculation in building measurement data is insane.
Yes, you have lower error at 1200 km, lower at 600 km, lower still at 300 km, and still lower at 100 km. So why not use 100 km? Maybe because it becomes obviously meaningless. Unfortunately, 1200 km is also meaningless; it is a huge area, and temperature can vary a great deal within 1 km, never mind 1200 km.
The global average is made with fill-ins of missing data, so I have no idea how using it to fill in missing data is viable. Is the snake eating its own tail?

If you choose to leave missing data as “missing” you have, in effect, given it the value
of the ‘average of the whole’

No, in no way do I give anything the value of the whole. I am one of those people who think coming up with an annual global temperature to tenths of a degree is a fool’s errand. It may be possible someday if we limit it only to land surface temperature and do not misrepresent it as “global temperature”. Put simply, there is way too much to measure, and making up measurements only magnifies that issue and in no way mitigates it.

trafamadore
Reply to  Alx
October 23, 2014 4:45 pm

Don’t you understand? Leaving it blank (for missing data) is the same as using the average value. It’s a math thing.

Reply to  Alx
October 24, 2014 10:57 am

As noted below, you DON’T GET IT. Leaving cells blank is THE SAME as infilling with the average of all data.
here let me show you, from a post I did. This focuses on the arctic.
http://judithcurry.com/2014/02/25/berkeley-earth-global/#more-14768
“The reason for looking at these different approaches will also allow us to make observations about the choice that HadCrut4 makes. In their approach they leave these grids cells empty. Let me illustrate the different approaches with a toy diagram:
3 3 3 3 3
3 5 5 5 3
3 5 NA 5 3
3 5 5 5 3
3 3 3 3 3
Table A
In table A the average is 3.67 when we compute the average over the 24 cells with data. That is operationally equivalent to table B.
3 3 3 3 3
3 5 5 5 3
3 5 3.67 5 3
3 5 5 5 3
3 3 3 3 3
Table B
Such that when we refuse to estimate the missing data, it has the same result and is operationally equivalent to asserting that the missing data is the average of all other data.
When we estimate the temperature of the globe we are using the data we have to estimate or predict the temperature at the places where we have not observed. In the Berkeley approach we rely on kriging to do this prediction. I found this work helpful for those who want an introduction: http://geofaculty.uwyo.edu/yzhang/files/Geosta1.pdf . Consequently, rather than leaving the arctic blank, we use kriging to estimate the values in that location. This is the same procedure that is used at other points on the globe. We use the information we have to make a prediction about what is unobserved. In slight contrast, the approach used by GISS is a simple interpolation in the arctic. That would yield table C and an average of 3.72 as opposed to 3.67. (Note that there are times when the interpolation result will give the same answer as a krige.) Both approaches, however, use the information on hand to predict the values at unobserved locations.
3 3 3 3 3
3 5 5 5 3
3 5 5 5 3
3 5 5 5 3
3 3 3 3 3
Table C
The bottom line is that one always has to make a choice when presented with missing data and that choice has consequences; sometimes they can be material. Up to now the choice between ignoring the arctic or interpolating hasn’t been material. It may still not be material, but it’s technically interesting.”
In short:
You have to make a choice.
Leaving the data MISSING is the SAME as infilling with the average of the whole. See Table B.
The goal is computing the average trend of the world.
When you have missing data you have TWO choices:
1. Leave it blank.
2. Explicitly infill it.
When you leave it blank, your ANSWER is MATHEMATICALLY the SAME as infilling with the average of the whole. You are implicitly asserting, without arguing, that the unsampled area is the same as the average of the whole. You are implicitly accepting, without argument, the position that unsampled areas are not different from the whole.
If you choose to explicitly infill it, then you will do a better job if you consider data that is geographically close. You can define “close” by calculating a correlation length.
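
A quick numeric check of the toy tables, using the same numbers as above:

```python
outer = [3.0] * 16          # the ring of 3s in the 5x5 grid
inner = [5.0] * 8           # the 5s surrounding the centre cell

# Table A: centre cell missing; average over the 24 observed cells.
avg_a = sum(outer + inner) / 24
# Table B: centre infilled with that average; the mean is unchanged.
avg_b = (sum(outer + inner) + avg_a) / 25
# Table C: centre interpolated from its neighbours (the 5s).
avg_c = (sum(outer + inner) + 5.0) / 25

print(round(avg_a, 2), round(avg_b, 2), round(avg_c, 2))  # 3.67 3.67 3.72
```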

milodonharlani
Reply to  Steven Mosher
October 22, 2014 7:25 pm

One of the many reasons why your worse than worthless GIGO GCMs have failed so miserably in predicting even the climate of 1997-2014, let alone 2100, is that they have not been updated to take into account the many discoveries in climatology since 1980, made in spite of the CACA mafia.

milodonharlani
Reply to  milodonharlani
October 22, 2014 7:28 pm

PS: I’m doing all I can to ensure that after the 2016 elections, the US will join the heroic leaders of Canada & Australia in calling BS on CACA. End the insanity now.

sturgishooper
Reply to  Steven Mosher
October 22, 2014 7:34 pm

When the funding changes with a new administration, the “consensus” will change, too.
Whose tune will you sing then, Steven, when the piper pays only for a different tune?

Reply to  sturgishooper
October 24, 2014 10:44 am

I sang the same tune from 2007 to 2013, when nobody paid me.
In 2013 the Koch brothers paid me. The answer got more certain.
Today, both left and right, both green and non-green, pay the bill.
The answer gets more certain.
Bottom line: when you spend 6 years working for free, nobody can buy you.

Reply to  Steven Mosher
October 22, 2014 8:50 pm

No, what you do is get the data from weather balloons lofted by Russians or dropped by Americans from aircraft.
And when you do that you discover the models have made up the data instead of using real data. And that is why by 1990 they found warming where there had been cooling.
JONATHAN D. KAHL, DONNA J. CHARLEVOIX, NINA A. ZAITSEVA, RUSSELL C. SCHNELL & MARK C. SERREZE
Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years. Nature 361, 335 – 337 (28 January 1993); doi:10.1038/361335a0
http://www.nature.com/nature/journal/v361/n6410/abs/361335a0.html

Reply to  Steven Mosher
October 22, 2014 10:13 pm

Hehe, you don’t want to get it, do you?
The problem with the 1200 km smoothing (infilling) isn’t applying it where you don’t have data. It’s applying it where you DO have data, but choose to ignore it. This is what GISS does in the Arctic and the Antarctic:
http://bobtisdale.wordpress.com/2010/05/31/giss-deletes-arctic-and-southern-ocean-sea-surface-temperature-data/

Reply to  Steven Mosher
October 22, 2014 11:01 pm

As long as you have data within 1200km you will have a LOWER ERROR by infilling with that data, rather than ‘leaving it blank”.

Bollocks … 1200 km north or south is a huge difference anywhere in the world … completely different climatic zones. This is precisely why the Australian BoM has cocked up our temperature record, by infilling with garbage ‘data’ estimates.

Reply to  Streetcred
October 22, 2014 11:05 pm

I’ll add to that the Republic of South Africa with the vast change between the humid tropical east coastal strip and the Highveld less than 800km away to the west and 3,000 ft higher in elevation.

Reply to  Streetcred
October 22, 2014 11:06 pm

… make that 4500 ft elevation.

Reply to  Streetcred
October 24, 2014 10:28 am

Wrong.
Here is your choice:
A) use data from within 1200 km
B) use all the data
If you TEST this choice you will see that on average your error is lower by using closer data.
Go test it. I dare you.
Pick any station in the US. Hide the data. Hold it out.
Next, using only data within 1200 km, estimate that held-out data.
Next, using all the data, estimate that held-out data.
Next, take the held-out data and compare it with both estimates. Compute the error.
This is known as hold one out.
Next, do this for all 40,000 stations in the world. One by one, “hold one out”.
One by one, use data within 1200 km, then use all the data.
Here is what you will find:
On average over all 40,000 cases, the estimates of the held-out data are BETTER if you use data within 1200 km than if you use all the data.
So:
A) You must infill. You have no choice in the matter. Refusing to infill is mathematically equivalent to imputing the average of all the data to the missing data.
B) When you infill you have a choice: use data from nearby stations, or use data from all other stations.
C) If you define nearby as 1200 km, you will reduce the error of your estimate.
A, B and C are just facts. Not much sense in denying facts.
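
A minimal sketch of that hold-one-out test on synthetic stations; the latitude-dependent toy data stands in for real spatial correlation, so it illustrates the procedure, not the real-world numbers:

```python
import math, random

random.seed(0)

# Synthetic stations: value depends on latitude, so nearby stations
# really are more alike (a stand-in for spatial correlation).
stations = [(random.uniform(-60, 60), random.uniform(-180, 180)) for _ in range(200)]
values = [25 - 0.4 * abs(lat) + random.gauss(0, 1) for lat, lon in stations]

def dist_km(a, b):
    """Great-circle distance via the haversine formula."""
    (la1, lo1), (la2, lo2) = a, b
    p1, p2 = math.radians(la1), math.radians(la2)
    dl = math.radians(lo2 - lo1)
    x = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(x))

err_near, err_all = 0.0, 0.0
for i in range(len(stations)):
    # Hold station i out; estimate it from the rest, two ways.
    rest = [(stations[j], values[j]) for j in range(len(stations)) if j != i]
    near = [v for s, v in rest if dist_km(s, stations[i]) <= 1200]
    est_all = sum(v for _, v in rest) / len(rest)
    est_near = sum(near) / len(near) if near else est_all
    err_near += abs(values[i] - est_near)
    err_all += abs(values[i] - est_all)

print("mean |error|, infill from within 1200 km:", round(err_near / len(stations), 2))
print("mean |error|, infill with global mean:   ", round(err_all / len(stations), 2))
```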

tty
Reply to  Steven Mosher
October 23, 2014 12:50 am

The real fun starts when you use the 1200 km rule to “homogenise” data, like when you use temperature data from the Faeroes which has one of the most maritime and least variable temperature regimes in the World to “correct errors” in temperature data from Iceland which has extreme year-to-year variation due to variable ice conditions.

MikeB
Reply to  Steven Mosher
October 23, 2014 2:06 am

Steve, I take your point that if you don’t ‘infill’ then you effectively assign the overall global average to the unmonitored areas.
However, there is a big difference between interpolation and extrapolation. Interpolation may be justified.
Temperature anomalies are extrapolated for Arctic regions from warmer stations to the south. Coincidentally, it is claimed that the Arctic is warming faster than the global average. But this is the region where there are no thermometers and extrapolation has been used (by GISS). Something suspiciously convenient about that truth, don’t you agree?
Remove the extrapolation and you have no warming.

Tom O
Reply to  Steven Mosher
October 23, 2014 6:36 am

How can using “infill” from across 1200 km be of any use, really? A real-life example, Steve, is that in Maine, where I lived, my house was 5 degrees cooler than those just a mile away. The temperature in my town could be 71 degrees, while the towns in a 5-mile radius could be anywhere from 3 degrees cooler to 6 degrees warmer. Here in Phoenix, or the metroplex at least, there can be an 8-degree variation across the entire region, and 40 miles away you have desert that can be warmer, or cooler, still. You can’t accurately use an infill across any region in the world that large if the temperature can vary 8 degrees in a 10 sq mile region. Get real. Mathematics is useful to suggest conditions and situations, but it does NOT prove them, nor does it “model” them.

Duster
Reply to  Steven Mosher
October 23, 2014 1:46 pm

It is not at all clear that a “lower error” is a “good thing” in science. All that has really been “lowered” is statistical estimate of the error, and that is based in turn upon a statistical estimate of real world states for which we have no sound estimates. A 1,200 km radius circle has an (Euclidean) area of well over 1,600,000 square miles. Estimating the area as a surface of a sphere yields an even larger area. That is essentially a claim that infilling temperature for eastern Nevada based on a station in San Francisco makes sense. You might support such an argument for estimating the temperature of the mid-Willamette Valley since the two locations are in a somewhat more comparable relation to marine influences. The short version is that while “in-filling” may be necessary mathematically, geographically it makes no kind of sense.

milodonharlani
Reply to  Duster
October 23, 2014 2:16 pm

See my comment above on Death Valley & Rocky Mountain peaks.
But “in filling” is more likely to occur in a less well sampled part of the planet, say a coastal Chilean station for a huge swath of the South Pacific Ocean.

Reply to  Duster
October 24, 2014 10:18 am

“It is not at all clear that a “lower error” is a “good thing” in science.”
A) A lower error beats a bigger error.
B) All science has error; the goal is reducing error.
“All that has really been ‘lowered’ is statistical estimate of the error, and that is based in turn upon a statistical estimate of real world states for which we have no sound estimates.”
A) Wrong. It’s based on the data.
B) The estimates are sound; you can test them.
“A 1,200 km radius circle has an (Euclidean) area of well over 1,600,000 square miles. Estimating the area as a surface of a sphere yields an even larger area. That is essentially a claim that infilling temperature for eastern Nevada based on a station in San Francisco makes sense.”
A) You CANNOT AVOID infilling.
B) Refusing to infill is MATHEMATICALLY EQUIVALENT to infilling with the average of all data.
C) You have one choice: estimate eastern Nevada with SF, or estimate eastern Nevada with the REST of the planet. You can in fact TEST this choice by doing hold-out analysis. It is ALWAYS better to estimate with data from nearby than to leave it blank. Leaving it blank imputes the average of the rest of the world to the location.
“You might support such an argument for estimating the temperature of the mid-Willamette Valley since the two locations are in a somewhat more comparable relation to marine influences. The short version is that while “in-filling” may be necessary mathematically, geographically it makes no kind of sense.”
A) When you look at trends, which is what people do, geography doesn’t play that big a role. This is provable. Of course, for absolute temps the mid-Willamette may be better. Actually, anything at the same latitude and same altitude will do, since altitude and latitude account for over 90% of the variance.
B) Since infilling is mathematically necessary, it follows that one should infill with the best data. The best data is the closest data, without a doubt. Without a doubt you will get a better answer if you infill with any data within 1200 km rather than data outside 1200 km. That’s provable, and proven.

otsar
October 22, 2014 1:16 pm

1. Good article.
2. If I remember correctly Koppen used soils as one of his inputs for climate classification.

milodonharlani
Reply to  otsar
October 22, 2014 2:39 pm

On CO2, climate & soil, please see Dyson clip below.

October 22, 2014 1:17 pm

‘I’ve been ridiculed for having a degree in climatology issued through a department of geography.’
To my understanding, one of the most significant of all climatological effects is continental drift. The migration of what is now Antarctica to the South Pole seems the most viable explanation for the transition of the Earth’s climate to the Ice Age and the mini ice ages that followed. It would seem bizarre to ridicule somebody who acquired their degree in climatology through a department of geography.

Alx
Reply to  Tom J
October 22, 2014 6:24 pm

You do not get ridiculed for acquiring a degree in climatology through a department of geography. You get ridiculed for not agreeing with AGW.

milodonharlani
Reply to  Alx
October 22, 2014 6:26 pm

Correctamundo. If you’re heterodox, your enemies will seize upon whatever chink in your armor they think will work.

mpainter
October 22, 2014 1:46 pm

Geography determines climate.
The use of the term “climate change” is sloganeering. Climate does not change.
The global climate models are merely projections of “climate sensitivity”, a figure which is most dubiously derived and ranges from under 1 K to over 8 K, according to the lights of the person who does the “figuring”.

Pamela Gray
Reply to  mpainter
October 22, 2014 7:57 pm

Yes. Climate is bound by GPS address, not CO2. Weather-pattern variations meander weather trends back and forth between the extreme boundaries of what can happen at that GPS address. And those patterns are determined by semi-permanent, large and random, regional-sized oceanic/atmospheric teleconnected pressure systems. A slight change in a trace gas that is only a tiny fraction of our atmosphere cannot have an effect on such a system.

milodonharlani
October 22, 2014 2:04 pm

Dyson regrets climatology’s getting hijacked by modelers when what it needed 30 years ago, and needs now, is more data:

milodonharlani
Reply to  milodonharlani
October 22, 2014 2:14 pm

We were among the first in Umatilla County to adopt no-till. Today I could probably have applied for carbon offsets.

milodonharlani
Reply to  milodonharlani
October 22, 2014 2:18 pm

Strange that now Dyson’s belief that Nature can be improved upon should be considered shocking by some.

NZ Willy
Reply to  milodonharlani
October 22, 2014 2:48 pm

Pursuant to that, Dyson and others at Princeton were doing laboratory testing of CO2 in the 1950s to experimentally determine its heat retention. That’s the way to do science!

milodonharlani
Reply to  NZ Willy
October 22, 2014 3:10 pm

Dyson makes the point that he, heir to Einstein, can afford to be a heretic, but that what is needed is more young heretics, who can’t be bought off.

Reply to  milodonharlani
October 22, 2014 3:50 pm

Interesting comments by Dyson, and good to hear.
It seems, however, at least in this segment that he doesn’t quite follow the logic through. Specifically, he spends most of the segment talking about how increased CO2 results in increased biomass growth. Then he turns to whether we can use this fact of increased biomass growth to intelligently manage land in order to “stop CO2 from increasing.” But why in the world would we want to stop CO2 from increasing in the first place? By his own discussion, one of the great things about increased CO2 is increased biomass growth.
In any event, it is not clear from this segment whether he really thinks global warming is a problem and can be managed, or whether he thinks the whole concept is on shaky ground. It seems like the former. That is well and good, and will probably garner a larger audience. But why global warming would be viewed as a negative in the first place is a very fundamental question that needs to be addressed.

milodonharlani
Reply to  climatereflections
October 22, 2014 6:04 pm

My take is that he sees CO2 growth so far as a good thing, but that if it should pose some threat, which he doesn’t rule out, it can be regulated by land management & even genetic engineering, among other sink control mechanisms. The take away is that it’s not a problem, is now a good thing & could be in future as well.

NZ Willy
Reply to  climatereflections
October 22, 2014 6:37 pm

There are limits to everything and CO2 increase may be beneficial now but *if* it increases monotonically over centuries then it would eventually be like living in a sewer surrounded by septic waste. I think Dyson was talking to that, keeping the “if” in mind.

milodonharlani
Reply to  climatereflections
October 22, 2014 6:49 pm

Willy:
Even the Father of Global Warming, Wallace Broecker, admits that within 1000 years (less IMO) any man-made CO2 increase would be absorbed by natural sinks. Science still doesn’t really have a good handle on the sinks, perhaps because the CACA religion discourages investigations that might produce inconvenient truths.

Alx
Reply to  milodonharlani
October 22, 2014 5:58 pm

Yes, it is so blindingly obvious; it is stupefying that bad data, or not well understood data, can be used as a basis for anything other than taking up space on a hard disk.
Get the data right, and if you can’t get it right past x years into the past, or cannot get it right without starting over from today with new systems and processes, then admit it and get to work: do it, start over. It’s not the first time a path in science had to be scrapped and a new direction taken.

rd50
Reply to  milodonharlani
October 22, 2014 7:42 pm

embarrassing

pekke
October 22, 2014 2:26 pm
milodonharlani
October 22, 2014 2:42 pm

Some so-called “climate scientists” today have degrees in computer modeling, so Dr. Ball’s degree is more relevant to real climatology.

October 22, 2014 2:51 pm

Where Was Climate Research Before Computer Models?
————
Still putting on its shoes, while AGW had traveled round the world many times.
(lies propagate faster now than they did in Twain’s time)

rd50
Reply to  Mark and two Cats
October 22, 2014 8:21 pm

Climate research before computer models?
No problem.
Before the “computer models”, scientists proposed that 9 different climate areas would be about right to describe the “climate” in our world, surface only.
We learned them in school, from our teachers. They made sense. No statistical analysis needed.
Can you imagine dividing planet Earth other than by different areas, as previous scientists actually found by traveling and observing outside their offices? You can argue: maybe 9 areas, maybe 7, or maybe 11.
The big advance with computer models is to forget differences. AVERAGE is the norm with computer models.
You must be kidding. It was a lot more interesting before computer models, and, well… much more logical to teach us that “differences” rather than “averages” existed in our world.

October 22, 2014 2:52 pm

Tim Ball is going from strength to strength while his would-be nemesis Mann is flouncing around the world like a jaded circus act. Good.

Reply to  John Shade
October 22, 2014 3:12 pm

+10

Alx
Reply to  John Shade
October 22, 2014 5:50 pm

Circus act. Perfect description.
Put a red ball on his nose, giant shoes on his feet and I do think he would make a great clown. Alas, Mann has missed his calling.

Svend Ferdinandsen
October 22, 2014 3:06 pm

I have never understood the excuse from the modellers about lack of precision of the start condition.
They average over tens of years and the whole globe, so what could the start condition really mean in the long run?
If I take their words for it, it must mean that the butterfly effect is real, and then no model could ever do any forecast. You cannot model butterflies, birds, forest fires and whatever. Not even a climate scientist taking a plane to some exotic place.

joeldshore
Reply to  Svend Ferdinandsen
October 22, 2014 5:42 pm

So, I can’t predict that it is going to be colder here in Rochester, NY in January than it is in July? That’s an interesting hypothesis!

Reply to  joeldshore
October 22, 2014 11:15 pm

There’s your problem … most of us don’t need a ‘climate model (TM)’ to do that.

Pethefin
Reply to  joeldshore
October 22, 2014 11:41 pm

Hmm, downgrades. The trolls aren’t what they used to be…

joeldshore
Reply to  joeldshore
October 23, 2014 1:24 pm

Streetcred: It’s irrelevant whether or not you could get the answer other ways. What Svend is essentially saying is that a climate model could never be used to predict the seasonal cycle because of the “butterfly effect”. That is clearly wrong.
Saying a system is chaotic does not mean that its behavior can’t be modeled. What it means is that behavior that is extremely sensitive to initial conditions cannot be modeled (at least very far out in time). However, other behaviors, such as the variation of climate with season (due to the large local changes in solar irradiance) or the variations in climate with changes in the Earth’s radiative balance, CAN be modeled.

mpainter
Reply to  joeldshore
October 23, 2014 2:47 pm

Joeldshore:
“Earth’s radiative balance can be modeled”
Not very well when it is misunderstood, as the products of the GCMs show.

Konrad.
Reply to  Svend Ferdinandsen
October 22, 2014 8:19 pm

Svend,
you are correct; this kind of modelling was never going to work. It is not just that the initial conditions were imprecise: the parametrisations within the GCMs, particularly for vertical energy transport, are provably in error.
Meteorologists use very similar models to the Climastrologists, but in contrast they understand the work of Edward Lorenz. They know their model will become increasingly inaccurate the longer they run it, because of both inaccuracies in initial conditions and internal parametrisations.
What meteorologists do is a “Monte Carlo run”. Here they run the model several times with intentionally perturbed initial conditions. If after a short time the runs diverge greatly, they can only make a short-term weather prediction. If the runs stay close in their predictions, they can make a longer-term prediction. One thing meteorologists would never do is average the runs and call that a prediction.
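
A minimal sketch of that perturbed-ensemble idea, using the Lorenz-63 system as a toy stand-in for a forecast model; the useful output is how long the members stay together, not their average:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run(state, n_steps):
    traj = [state]
    for _ in range(n_steps):
        state = lorenz63_step(state)
        traj.append(state)
    return np.array(traj)

# Ensemble: the same model, with tiny perturbations to the initial state.
base = np.array([1.0, 1.0, 1.0])
rng = np.random.default_rng(0)
members = [run(base + rng.normal(0, 1e-6, 3), 3000) for _ in range(5)]

# Ensemble spread over time: while it stays small the forecast has
# skill; once the members diverge, predictability is gone. Averaging
# the diverged members is not a prediction of anything.
spread = np.array(members).std(axis=0).max(axis=1)
for step in (0, 500, 1000, 2000, 3000):
    print(step, float(spread[step]))
```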

joeldshore
Reply to  Konrad.
October 23, 2014 1:26 pm

“One thing meteorologists would never do is average the runs and call that a prediction.”
That’s because meteorologists are predicting the WEATHER and climate modelers are predicting the AVERAGE WEATHER, i.e., the CLIMATE.

Konrad
Reply to  Konrad.
October 23, 2014 5:18 pm

Climate may indeed be the average of weather, but averaging diverging climate model runs is still wrong. Meteorologists know that divergence = no long term predictive skill. Over 70 climate models have been shown to diverge from each other and all diverge from reality.

Duster
Reply to  Svend Ferdinandsen
October 23, 2014 1:56 pm

The Butterfly Effect is very real. Edward Lorenz, who first identified it mathematically while trying to model weather, pointed out that “sensitivity to initial conditions” is critical: over spans of less than a month, any prediction will begin to hunt badly, diverging from reality steadily. More importantly, a system under the influence of a strange attractor will oscillate in a quasi-cyclic fashion, orbiting one state very closely at times and diverging strongly at others, until it abruptly flips and begins cycling around a different state. The various climate cycles seem to all be varieties of this. The team approach, which attempts to linearize “climate,” is essentially 18th-century analysis with lipstick.

Pamela Gray
October 22, 2014 3:07 pm

Let’s get back to the central reason for defining climate and weather zones: Agriculture and safety. NOAA updated their forecast zone boundaries about 5 years ago. Why? So that weather forecasts were specific to climate zones. How did they come up with these boundaries? Two ways. Sensor measurements and the fact that the same weather system affects these zones in unique ways based on their topographical features. There is no smearing here. No empty mountain grids that get painted with a coastal brush. Someone put feet to the ground and eyes in the sky in order to more accurately say that snow will be here but not there. Frost will affect this area but not that one. These types of forecasts are used for both safety and agricultural purposes. What else matters? If I forget to take my umbrella to work and my hair gets wet, who cares. But if an entire crop of corn is wiped out, that matters. Or if a coastal area was caught uninformed, heads should roll.
Cut the climate budget and make each and every employee focused on two things: Agriculture and public safety now and 10 years ahead. No farmer and certainly no citizen cares about agriculture and safety 50 years from now. Farmers want to know what to plant now and what to plan for 5 to 10 years out, and citizens want to know whether or not to sand bag their house or stock up on winter heating and restock the larder. Throw the rest of the departments out the door and their pink slips with them.
http://nws.noaa.gov/mirs/public/prods/maps/pfzones_list.htm

mpainter
Reply to  Pamela Gray
October 22, 2014 3:15 pm

Good comment

Alx
Reply to  Pamela Gray
October 22, 2014 5:47 pm

“Throw the rest of the departments out the door and their pink slips with them.”
Amen.

October 22, 2014 3:10 pm

Unlike the IPCC, which focuses almost exclusively on temperature, Koppen recognized that water, in all its phases, was generally paramount.
Oh my goodness! On a water planet we are allowed to mention that water in all its forms is the dominant feature of our climate. Shazam Andy!

Samuel C Cogar
October 22, 2014 3:28 pm

“Some blame the education system used to indoctrinate, rather than teach.”
——————–
It’s a lot easier to indoctrinate than it is to educate.

John Boles
October 22, 2014 3:42 pm

Non-linear, chaotic, and coupled system, with cloud effects not fully understood, and other significant factors not well understood, and the grid size is a bit big. Climate models fail big time, they are worthless.

mpainter
Reply to  John Boles
October 22, 2014 4:02 pm

John Boles:
You put it all in a nutshell. Advocates of climate modeling refuse to acknowledge these deficiencies in their science and pretend that they are doing worthwhile science. And when the models fail signally they ignore that.

milodonharlani
Reply to  John Boles
October 22, 2014 4:31 pm

For formulating public policy, GCMs are worse than worthless. Their only utility is to show that CO2 is not the control knob on climate, since they have failed so miserably as a result of programming that GIGO assumption in order to obtain the desired result.

Reply to  milodonharlani
October 22, 2014 11:18 pm

Ah! But for promoting socialism they have been an exceptional tool.

milodonharlani
Reply to  milodonharlani
October 23, 2014 11:13 am

Sad but true. The worm however may finally be turning, thanks to Mother Nature’s lack of cooperation with her errant children.

Reply to  milodonharlani
October 24, 2014 9:15 am

For formulating public policy they are useful.
In fact they are being used.

chriscafe
Reply to  John Boles
October 22, 2014 5:23 pm

Exactly! We need a lot more detailed explanation of climate models.
How do they approach the solution of these equations?
What assumptions do they use to ‘linearise’ the equations?
Decouple them?
Perturbation theory in another guise?
There has been a great deal of illuminating scrutiny of paleoclimate work. Now is the time to place models under similar scrutiny.

Konrad.
Reply to  John Boles
October 22, 2014 8:52 pm

“Climate models fail big time, they are worthless”
They used to work for their intended purpose, propaganda, but now they are even failing at that.
What most don’t get is that GCMs were designed to give the wrong answer. GCMs do not actually have the power to do CFD (computational fluid dynamics) in the vertical dimension. Simple 2D mathematical models are used to parametrise vertical energy transports, both radiative and non-radiative. It is through these parametrisations that the GCMs are told that CO2 = warming. They do not produce this answer via CFD. Essentially, they show warming because they are told to show warming.
Prior to the rise of the Church of Radiative Climastrology, the role of radiative subsidence in the Hadley, Ferrel, and Polar tropospheric circulation cells was an accepted working theory in meteorology. A good explanation of the pre-hoax science can be found here –
http://www.st-andrews.ac.uk/~dib2/climate/tropics.html

The thermal factors concern the changing energy balance of the air as it flows polewards. Air convected to the top of the troposphere in the ITCZ has a very high potential temperature, due to latent heat release during ascent in hot towers. Air spreading out at higher levels also tends to have low relative humidity, because of moisture losses by precipitation. As this dry upper air drifts polewards, its potential temperature gradually falls due to longwave radiative losses to space (this is a diabatic process, involving exchanges of energy between the air mass and its environment). Decreasing potential temperature leads to an increase in density, upsetting the hydrostatic balance and initiating subsidence. The subsiding air warms (as pressure increases towards lower levels), further lowering the relative humidity and maintaining clear-sky conditions. However, although the subsiding air warms, it does not do so at the dry adiabatic lapse rate. Continuing losses of longwave radiation (radiative cooling) means that the air warms at less than the dry adiabatic lapse rate (i.e. some of the adiabatic warming is offset by diabatic cooling).

As you can see, radiation plays a critical role in governing the speed of tropospheric convective circulation. This circulation combined with evapotranspiration is the primary energy transport away from the surface. Alter radiative gas concentration and you will necessarily alter the speed of energy transport away from the surface. More CO2 = faster circulation.
This is why there was a flurry of pro-AGW radiative/convective model papers post 1990. They needed to write radiative subsidence out of history. They needed “immaculate convection”, convection that would not change speed with increasing CO2. If convective transport remains constant then models will show surface warming for increasing radiative gases. It is these recent flawed 2D mathematical models that are used to parametrise vertical energy flow in the GCMs.
The models quite simply were designed to give the wrong answer, and compared to satellite data they clearly do.
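To make concrete the kind of parametrisation Konrad describes (as opposed to warming emerging from CFD), here is a minimal Python sketch of a prescribed CO2 warming term, using the widely cited Myhre et al. (1998) logarithmic fit; the sensitivity parameter is an illustrative assumption, not a value taken from any particular GCM:

```python
# Minimal sketch of a parametrised (not CFD-derived) CO2 warming term.
# The logarithmic fit is from Myhre et al. (1998); the sensitivity
# parameter is an illustrative assumption, not a value from any GCM.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) for CO2 at c_ppm relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def temperature_response(forcing_wm2, sensitivity_k_per_wm2=0.8):
    """Equilibrium warming (K) from a prescribed sensitivity parameter."""
    return sensitivity_k_per_wm2 * forcing_wm2

f = co2_forcing(560.0)  # doubled CO2 gives ~3.7 W/m^2
print(f"forcing {f:.2f} W/m^2 -> warming {temperature_response(f):.2f} K")
```

Whatever one thinks of the inputs, the structure is the point at issue: the warming falls straight out of the prescribed coefficients rather than out of any fluid dynamics.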

Duster
Reply to  Konrad.
October 23, 2014 2:01 pm

Konrad, thanks for posting that.

bobbyv
October 22, 2014 3:56 pm

Are there any serious measurements of below ground temperatures? Does the temperature of the earth fluctuate at all below 10m? How much heat can land hold – even beneath the ocean?

Pamela Gray
Reply to  bobbyv
October 22, 2014 4:13 pm

For agricultural purposes, yes there are. And the data goes back a century or more.
http://www.wcc.nrcs.usda.gov/scan/

Reply to  Pamela Gray
October 22, 2014 4:48 pm

Could Trenberth’s missing heat be there?

Pamela Gray
Reply to  Pamela Gray
October 22, 2014 5:47 pm

Probably not in the sense that Trenberth means what “IT” is that is hiding (he thinks it’s anthropogenic heat). It is true that soil temperature goes through short and long term oscillations like everything else. So yes, in the sense that sensible people understand soil can absorb heat, more in some decades, less in others. Ask a farmer. Soil temperature trends are a key component of planning future crops.

joeldshore
Reply to  bobbyv
October 22, 2014 5:46 pm

Yes, there are measurements…and even with very rough data, one can do basic back-of-the-envelope calculations to show that the intensity of heat that can be conducted up to the surface of the Earth in W/m^2 is very small compared to other things, such as the known radiative forcing due to doubling CO2 levels.
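A sketch of that back-of-the-envelope comparison, using rough literature values (total geothermal heat flow of roughly 47 TW, Earth’s surface area of about 5.1e14 m^2, and the commonly quoted ~3.7 W/m^2 forcing estimate for a CO2 doubling):

```python
# Rough comparison of mean geothermal heat flux with the commonly
# quoted radiative forcing for doubled CO2. Values are approximate.
geothermal_heat_flow_w = 47e12   # total geothermal heat flow, ~47 TW
surface_area_m2 = 5.1e14         # Earth's surface area
co2_doubling_wm2 = 3.7           # commonly quoted forcing estimate

geo_flux = geothermal_heat_flow_w / surface_area_m2
print(f"mean geothermal flux: {geo_flux:.3f} W/m^2")  # ~0.092
print(f"fraction of CO2-doubling forcing: {geo_flux / co2_doubling_wm2:.1%}")  # ~2.5%
```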

Reply to  joeldshore
October 22, 2014 6:43 pm

It’s the quantity of heat that matters. Since the oceans hold 1000x more than the atmosphere, and any ‘extra’ can be absorbed there, watching the atmospheric temperature is almost meaningless concerning chaotic oscillations of total heat. If the soil can absorb/emit on that scale, it’s even more meaningless. I’m guessing it’s a pretty big number since the ground is coupled to the ocean too.
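For what it is worth, the 1000x figure survives a rough heat-capacity check; a minimal sketch with approximate standard values for the masses and specific heats:

```python
# Rough ratio of total ocean heat capacity to total atmospheric
# heat capacity, using approximate standard values.
atm_mass_kg = 5.1e18    # mass of the atmosphere
atm_cp = 1004.0         # specific heat of air, J/(kg K)
ocean_mass_kg = 1.4e21  # mass of the oceans
ocean_cp = 3990.0       # specific heat of seawater, J/(kg K)

ratio = (ocean_mass_kg * ocean_cp) / (atm_mass_kg * atm_cp)
print(f"ocean/atmosphere heat capacity ratio: ~{ratio:.0f}")  # ~1090
```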

Konrad.
Reply to  joeldshore
October 22, 2014 9:30 pm

such as the known radiative forcing due to doubling CO2 levels

Joel,
about that little “known” thing….
Remember the old claim of a surface at 255K being raised 33K by “radiative forcing”? Doesn’t look too good if that 255K for a “surface without atmosphere” receiving 240 w/m2 should be around 312K, now does it?
Treating the oceans as SW opaque, with emissivity and absorptivity near unity, and constantly illuminated (instead of SW translucent, IR opaque, with absorptivity asymmetric with emissivity, and intermittently illuminated) can cause such fist-biting errors.
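For reference, the disputed 255K figure is just the Stefan-Boltzmann law inverted for an ideal blackbody absorbing 240 w/m2; a minimal sketch of that standard calculation:

```python
# The standard Stefan-Boltzmann inversion behind the 255 K figure:
# equilibrium temperature of an ideal blackbody emitting 240 W/m^2.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_temp_k(flux_wm2):
    return (flux_wm2 / SIGMA) ** 0.25

print(f"{blackbody_temp_k(240.0):.1f} K")  # ~255.1 K
```

Whether that e=a blackbody treatment is appropriate for a translucent, intermittently lit ocean is exactly what is being argued here.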

joeldshore
Reply to  joeldshore
October 23, 2014 1:31 pm

Yes, Konrad, I understand that not EVERYBODY accepts even well-known science. Some even deny basic physics. I know that you are in that camp….But, it is a pretty lonely place to be with no reputable atmospheric scientist or physicist (i.e., not even Fred Singer, Roy Spencer, or Richard Lindzen) to agree with you.

joeldshore
Reply to  joeldshore
October 23, 2014 1:34 pm

bobby: I am not sure why you say it is the quantity of heat that matters. The temperature of our atmosphere is going to be determined by the RATE at which heat is absorbed and emitted. And that rate is just too slow for transfers between the Earth’s interior and the atmosphere to make much difference.

Konrad
Reply to  joeldshore
October 23, 2014 5:32 pm

Alinsky techniques and an appeal to authority, Joel?
Which “well-known science” am I not accepting? Which “basic physics” am I denying?
The oceans are –
– SW translucent
– IR opaque
– Intermittently illuminated
– Internally convecting
– Have an effective (not apparent) IR emissivity lower than SW absorptivity
The only ways to model the ocean response to solar SW are CFD or empirical experiment. Climastrologists provably did neither to come up with that ludicrous 255K figure. They just applied standard e=a SB equations to an opaque BB surface illuminated by a constant 240 w/m2. Empirical experiment clearly shows they got it wrong by around 80C.
It doesn’t matter how many people accept that 255K figure or how famous they are; if it disagrees with empirical experiment, it is wrong.

Duster
Reply to  bobbyv
October 23, 2014 2:31 pm

Below the surface, temperature increases at about 1°F per 70 feet. There are all kinds of gotchas. Geothermal areas, for instance, start out hot and get hotter more rapidly. However,
http://www.ncdc.noaa.gov/data-access/paleoclimatology-data/datasets/borehole
provides data on temperature at various depths for a huge number of boreholes. Just grabbing one dataset, a quick run through gives very, very roughly a one degree C increase per 100 meters. The plot is quite smooth, so any fluctuation is damped out very quickly.
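A quick unit conversion puts the two gradients quoted above on a common metric footing:

```python
# Convert the 1 degF per 70 ft rule of thumb to degC per 100 m.
deg_f_per_ft = 1.0 / 70.0
deg_c_per_m = (deg_f_per_ft * 5.0 / 9.0) / 0.3048
print(f"{deg_c_per_m * 100:.1f} degC per 100 m")  # ~2.6
# versus the ~1 degC per 100 m eyeballed from the single borehole dataset.
```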

bobbyv
Reply to  bobbyv
October 24, 2014 6:46 am

Joel: I admit I have no grasp of magnitude/intensity. How much can the ground temp vary over 100s or 1000s of years? If we are trying to tease out a few degrees over this time period, is it possible that this can contribute?

John Boles
October 22, 2014 4:15 pm

I like to think about deep time, and I wonder if, when the poles were melted, the rest of the world, nearer the tropics, was a bit cooler so that the average was about the same as today. I will not trust a climate model to tell me.

mpainter
Reply to  John Boles
October 22, 2014 5:04 pm

In fact, tropical conditions expand poleward. For example, during the Eocene, about 50 million years ago, London was a tropical rain forest. That is 50 N. latitude. We know this by fossils from the London Clay. At this time Anchorage was subtropical, with palms (about 60 N latitude). And from the Eocene of Ellesmere Island, at 80 N latitude, are the fossil stumps of metasequoia, “dawn redwood”.

milodonharlani
Reply to  mpainter
October 22, 2014 5:22 pm

Primates of modern aspect may well have arisen in the subtropical forests of Early Eocene Wyoming:
http://en.wikipedia.org/wiki/Notharctus_tenebrosus

u.k(us)
October 22, 2014 4:20 pm

Take a day hike (6 hours in, 6 hours out) and you’ll quickly figure out who is in charge.
And the blisters last for 3 weeks, near as I can tell.
Do you prick that blister under the thick skin of your heel or not ?
I didn’t, and walked on it for 3 weeks before it went away.
Any better suggestions would be welcome 🙂

Alx
October 22, 2014 5:45 pm

Sure, there would always be something missing, but with the aid of more data and the growing computational power…

Another “You got to be kidding me” moment.
There would always be something missing. OK, science is never settled, well, except for climate science. Ignoring that hypocrisy, the base issue is that climatology has demonstrated it is clueless about what’s missing, and even when gaps are known, climatologists alternately deny or ignore them.
But who cares? We have bigger computers!
There is more data, but it is unstable and constantly manipulated, because it rests on super wild-ass guesses as to how the various data sources actually work and what they actually represent.
But who cares? We have bigger computers!
What a philosophy: if you do it wrong, but do it over and over really fast, it comes out right. Sounds silly, but that is exactly what is done when they average the average of model runs to get their forecasts or predictions or trends or whatever they claim models do.

Tanya Aardman
October 22, 2014 6:12 pm

You’ve missed the Grandfather of Climatology – Willett!

milodonharlani
Reply to  Tanya Aardman
October 22, 2014 6:24 pm

IMO H. C. Willett was more of a meteorologist, however long-range, than a climatologist.
IMO there is no single grandfather of climatology, but if I were forced to pick one among American meteorologists, it would be Reid Bryson (also an atmospheric scientist & geologist), who famously said, “You can go outside and spit and have the same effect as doubling carbon dioxide”.

markl
October 22, 2014 7:38 pm

By definition, models can never explain beyond theory. The media, with the help of the scientists, have misrepresented science to the people. Welcome to the information age.

October 22, 2014 8:34 pm

Footnote to an interesting blog: Koppen was Wegener’s father-in-law.

AndyE
October 22, 2014 9:51 pm

Brilliant – thanks Dr. Ball. The over-all view. That is real science: to comprehend the whole picture, not myopically losing yourself in the details of specialities. To see the wood in spite of all the trees. But take heart: the over-all view will win out in the end – it is winning out as we write!

Dr. S. Jeevananda Reddy
October 22, 2014 11:12 pm

I would like to bring to the notice of the group my work on climatic and agroclimatic classification, carried out during the late 70s and early 80s. This work I later put into a book: Agroclimatic/Agrometeorological Techniques: As Applicable to Dry-land Agriculture in Developing Countries (see http://www.scribd.com or Google Book search; also available in many libraries, and recommended for post-graduate studies in Agrometeorology & Agroclimatology). It runs to 205 pages and was published in 1993. A review of the book appeared in Agric. For. Meteorol., 67: 325-327 [1994]. Climate change is a part of this book.
Dr. S. Jeevananda Reddy

David Schofield
October 23, 2014 12:33 am

Just when we needed supercomputers to tell us how bad things are, we invented supercomputers! How lucky was that!

knr
October 23, 2014 6:46 am

Actually, it’s a good question, put another way: where was climate research before AGW was ‘proved’ by these models? Answer: nowhere. It was a poor relation to the physical sciences, little cared about, poorly funded, and making no headlines. And now there are many working in this area who owe their very careers to the manner in which these models ‘proved’ AGW, and who know that a return to the bad old days is what awaits them should ‘the cause’ fall. Even setting aside the political ideologies that seem attracted to the subject like a moth to a flame, you can easily figure out how they will react to the idea of ditching the models because they’re a failure.

chriscafe
Reply to  knr
October 23, 2014 4:39 pm

Where was climate research before modelling and CAGW? It was where the plodders from science courses found respite from having to compete with their peers.
Worse still, the advent of CAGW catapulted these plodders into positions of influence. You only have to look at their output to see the intellectual mediocrity.

Tom O
October 23, 2014 6:51 am

Excellent article. The statement made in the comments, that the trouble with models is that they are digital and the world is analog, comes very close to what I say frequently: mathematics can approximate reality, but reality can’t be “computed.” Because equations can “seem” to reflect what is happening, it somehow becomes set in stone that the world can, in fact, be mathematically modeled and that we can predict everything from those models. There is no equation that isn’t an approximation, even if it yields absolute values. You can factor a million factors into any “climate model” you want, and in the end the Earth will still throw you curve balls, because this really isn’t a hologram we live in, it is reality. And no, I can’t define reality, and I doubt if anyone can.

gbaikie
October 23, 2014 8:22 am

Well, since oceans cover 70% of Earth’s surface, perhaps a classification of oceanic regions should be done.

Pethefin
October 23, 2014 9:11 am

Excellent post, as it gives us a more holistic picture of climate-related science and points out the disconnect between what was known and what the current IPCC “science” is.
Interestingly, the Köppen-Geiger model seems to have been used by two mainstream climate scientists to illustrate “projected” climate change:
http://koeppen-geiger.vu-wien.ac.at/

October 23, 2014 3:51 pm

Where Was Climate Research Before Computer Models?

– – – – – – –
It was in cooling mode.
John

mpainter
October 23, 2014 4:01 pm

Dr. Ball suggests here that climatology has regressed since the advent of computer modeling. I have no doubt that is so. Since the present generation of climate scientists took over the field, the science has been in a rut.

Tom T
October 24, 2014 10:03 am

The evolution of climate science kind of reminds me of the Borg.
In the 1960s Roddenberry saw technology as this great savior, perfect in every way. Twenty years later he saw a dark side: the surrendering of humanity to technology. Modern climate scientists have surrendered their natural curiosity and humanity to computer models.
“We are the Borg. You will be assimilated. Resistance is futile” is pretty much the mantra of modern “climate scientists”.

Michael 2
October 24, 2014 10:15 pm

In this discussion of averages and whether to create data points ex nihilo as a way to reduce error, let us consider the extreme case:
2 and 10. The average is 6, but “6” isn’t in the list; it reveals very little about the world. Standard deviation: 5.65 (www.wolframalpha.com/input/?i=standard+deviation+2+10)
2, 6, 6, 6, 6, 6, 6, 10. The average is still 6, but now “6” appears many times (all invented, of course) and 2 and 10 are “outliers”. Standard deviation: 2.13.
Clearly, adding data points changes the standard deviation while not necessarily changing the average.
I think the models won’t operate without initial parameters, so it is clearly a necessity.
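The two cases are easy to reproduce; a minimal Python sketch using the sample standard deviation (the convention WolframAlpha applies to a short list like this):

```python
# Padding a two-point list with invented mean values leaves the
# average unchanged but shrinks the (sample) standard deviation.
import statistics

raw = [2, 10]
padded = [2, 6, 6, 6, 6, 6, 6, 10]  # six invented "6"s

print(statistics.mean(raw), round(statistics.stdev(raw), 2))        # 6 5.66
print(statistics.mean(padded), round(statistics.stdev(padded), 2))  # 6 2.14
```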

October 26, 2014 10:01 pm

Thanks Tim. A great article. Nice to have Koeppen and Thornthwaite “revered”. They were both very much part of the Climatology section I struggled with back in 1958 during completion of my Geography major. It was great also to read of Wegener and his theory of continental drift, later proved fact by Vine and Matthews [I think] from Cambridge uni. Before that, I had been telling my year nine students about the “Fiery Girdle of the Pacific” without having a clue why the Pacific was ringed by volcanoes. Continental drift, sea floor spreading and plate tectonics put that all very nicely into place. It was the most exciting time in my 34 years as a teacher of geography and geology between 1959 and 1992. Actual measurement will knock a model into a cocked hat every time. Cheers Tim.