On the scales of warming worry magnitudes – Part 2

Should We Worry About the Earth’s Calculated Warming of 0.7C Over the Last 100 Years When the Observed Daily Variations Over the Last 161 Years Can Be as High as 24C?

Guest post by Dr. Darko Butina

In Part 1 of my contribution I discussed the part of the paper describing the first step of data analysis, known as the ‘get-to-know-your-data’ step. The key features of that step are to establish the accuracy of the instrument used to generate the data and the range of the data itself. The importance of knowing those two parameters cannot be emphasized enough, since they pre-determine what information and knowledge one can gain from the data. In the case of a calibrated thermometer with an accuracy of +/- 0.5C, anything within a 1C difference in the data has to be treated as ‘no information’, since it is within the instrumental error, while every variation larger than 1C can be treated as real. As I showed in that report, the daily fluctuation in the Armagh dataset varies between 10C and 24C, and therefore those variations are real. In total contrast, all fluctuations in the theoretical space of annual averages are within the errors of the thermometer, and therefore it is impossible to extract any knowledge from those numbers.

So let me start this second part, in which I will quantify the differences between the annual temperature patterns, with a scheme that explains how a thermometer works. Please note that this comes from NASA’s engineers, specialists who actually know what they are doing, in contrast to their colleagues in the modelling sections. What a thermometer detects is the kinetic energy of the molecules surrounding it, and therefore the thermometer reflects the physical reality around it. In other words, the data generated by a thermometer reflect a physical property, called temperature, of the molecules (roughly 99% N2 and O2, plus water vapour) that surround it:

clip_image002

Let us now plot all of the Armagh data as annual fingerprints and compare them with the annual averages that are obtained from them:

clip_image004

Graph 1. All years (1844-2004) in Armagh dataset, as original daily recordings, displayed on a single graph with total range between -16C and +32C

clip_image006

Graph 2. All years (1844-2004) in Armagh dataset, in annual averages (calculated) space with trend line in red

Please note that I am not using any of ‘Mike’s tricks’ in Graph 2: the Y-axis range is identical to the Y-axis range in Graph 1. Since Graph 2 is created by averaging the data in Graph 1, it has to be displayed using the same temperature range to demonstrate what happens when a 730-dimensional space is reduced to a single number by the ‘averaging-to-death’ approach. By the way, I am not sure whether anyone has realised that not only has no paper analysing thermometer data been written by the AGW community, but also not a single paper has been written that validates the conversion of Graph 1 to Graph 2 – NOT A SINGLE PAPER! I have a quite good idea – actually, I am certain – why that is the case, but I will let the reader make up his or her own mind about that most unusual approach of inventing a new proxy-thermometer without bothering to explain the validity of the whole process to the wider scientific community.

The main reason for displaying the two graphs above is to help me explain the main objective of my paper, which is to test whether the hockey stick scenario of global warming, which was detected in the theoretical space of annual averages, can be found in the physical reality of the Earth’s atmosphere, i.e. in thermometer data. The whole concept of the AGW hypothesis is based on the idea that the calculated numbers are real and the thermometer data are not, while the opposite is true. Graph 1 is reality, and Graph 2 is a failed attempt to use averages to represent reality.

The hockey stick scenario can be represented as a two-line graph consisting of a baseline and an up-line:

clip_image008

The main problem we now have is to ‘translate’ the 730-dimensional problem, as in Graph 1, into a two-line problem without losing the resolution of our 730-bit fingerprints. The solution can be found in the scientific field of pattern recognition, which deals with finding patterns in complex data without simplifying the original data. One of the standard ways is to calculate the distance between two patterns, and one of the gold standards is the Euclidean distance; let’s call it EucDist:

EucDist(A, B) = √[ (A1 – B1)² + (A2 – B2)² + … + (An – Bn)² ]

There are 3 steps involved in calculating it: square the difference between each pair of datapoints, sum those squares up, and take the square root of that sum. The range of EucDist can be anywhere between ‘0’, when two patterns are identical, and a very large positive number – the larger the number, the more distant the two patterns are. One feature of using EucDist in our case is that it is possible to translate that distance back to temperature ranges by ‘back-calculating’. For example, when EucDist = 80.0, the average difference between any two daily temperatures is 3.14C:

1. 80 comes from the square root of 6400

2. 6400 is the sum of the squared differences across 649 datapoints: 6400/649 = 9.86

3. 9.86 is the average squared difference between any two datapoints, and the square root of 9.86 is 3.14

4. Therefore, when two annual temperature patterns are a distance of 80 apart in EucDist space, their baseline, or normal daily ‘noise’, is 3.14C
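The distance and the back-calculation above can be sketched in a few lines. This is a minimal illustration, not the author’s C code, and it assumes 649 paired daily readings per year, as in the paper:

```python
import math

def eucdist(a, b):
    """Euclidean distance between two annual temperature fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def avg_daily_difference(dist, n_days=649):
    """Back-calculate the average per-day temperature difference
    implied by a Euclidean distance computed over n_days datapoints."""
    return math.sqrt(dist ** 2 / n_days)

# EucDist of 80 over 649 shared daily readings -> ~3.14C average daily difference
print(round(avg_daily_difference(80.0), 2))   # 3.14
# EucDist of 110 (used later in the text) -> ~4.32C
print(round(avg_daily_difference(110.0), 2))  # 4.32
```

Running the back-calculation for the two thresholds discussed later (80 and 110) reproduces the 3.14C and 4.32C figures quoted in the text.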

Let me now very briefly introduce the two algorithms that will be used: the clustering algorithm dbclus, my own algorithm, which I published in 1999 and which has since become one of the standards in the field of similarity and diversity in the space of chemical structures; and k Nearest Neighbours, or kNN, which is a standard in the fields of datamining and machine learning.

The basic principle of dbclus is to partition a given dataset between clusters and singletons using an ‘exclusion circles’ approach, in which the user gives a single instruction to the algorithm – the radius of that circle. The smaller the radius, the tighter the clusters. Let me give you a simple example to help you visualise how dbclus works. Let us build a matrix of distances between every planet in our solar system, where each planet’s fingerprint contains its distance to all the other planets. If we start with a clustering run at EucDist=0, all planets will be labelled as singletons, since they all occupy different grid points in space. If we keep increasing the radius of the (similarity) circle, at some stage we will detect the formation of the first clusters, and we would find a cluster that has the Earth as centroid and only one member – the Moon. And if we keep increasing that radius to some very big number, all the planets of our solar system would eventually merge into a single cluster, with the Sun as cluster centroid and all the planets as cluster members. By the way, due to a copyright agreement with the publisher, I can only link my papers on my own website, which will go live by mid-May, where free PDF files will be available. My clustering algorithm has been published as pseudo-code, so any of you with programming skills can implement the algorithm in any language of your choice. Also, all the work involving dbclus and kNN was done on a Linux-based laptop, and both algorithms are written in C.
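The exclusion-circle idea can be sketched roughly as follows. This is a simplified illustration of the general approach, not the published dbclus pseudo-code, and the greedy centroid selection here is my own assumption about one reasonable way to do it:

```python
import math

def eucdist(a, b):
    """Euclidean distance between two fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def exclusion_clusters(points, radius):
    """Greedy exclusion-circle clustering sketch: repeatedly take the
    unassigned point with the most neighbours within `radius` as a
    centroid, assign those neighbours to its cluster, and exclude them
    from further consideration. Points left with no neighbours end up
    as singletons (clusters of size 1)."""
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        # neighbour lists within the exclusion radius, among unassigned points
        neighbours = {
            i: [j for j in unassigned
                if j != i and eucdist(points[i], points[j]) <= radius]
            for i in unassigned
        }
        centroid = max(unassigned, key=lambda i: len(neighbours[i]))
        members = [centroid] + neighbours[centroid]
        clusters.append(members)
        unassigned -= set(members)
    return clusters

# Toy 1-D example: two tight groups and one far-away singleton
data = [[0.0], [0.5], [1.0], [10.0], [10.4], [50.0]]
print(exclusion_clusters(data, radius=1.0))
```

With the toy data above, a radius of 1.0 yields one 3-member cluster, one 2-member cluster, and one singleton; shrinking the radius towards 0 turns every point into a singleton, mirroring the planets example.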

Let us now go back to the hockey stick and work out how to test that hypothesis using a similarity-based clustering approach. For the hockey stick scenario to work, you need two different sets of annual temperature patterns: one set of almost identical patterns, which form the horizontal line, and one set that is very different and forms the up-line. So if we perform a clustering run at EucDist=0, or very close to it, all the years from 1844 up to, say, 1990 should be part of a single cluster, while the 15 years between 1990 and 2004 should either form their own cluster(s) or, most likely, be detected as singletons. If the hockey stick scenario is real, the youngest years MUST NOT be mixed with the oldest years:

clip_image012

The very first thing that becomes clear from Table [4] is that there are no two identical annual patterns in the Armagh dataset. The next thing to notice is that up to a EucDist of 80, all the annual patterns remain singletons, i.e. all the years are perceived to be unique, with the minimum distance between any pair being at least 80. The first cluster is formed at EucDist=81 (d-81), consisting of only two years, 1844 and 1875. At EucDist 110, all the years have merged into a single cluster. Therefore, the overall profile of the dataset can be summarised as follows:

· All the years are unique up to EucDist of 80

· All the years are part of a single cluster, and therefore ‘similar’ at EucDist 110

Now we are in a position to quantify differences and similarities within the Armagh historical data.

The fact that any two years are at least 80 apart in EucDist space while remaining singletons translates into a minimum average variation in daily readings of 3.14C between any two years in the database.

At the other extreme, all the years merge into a single cluster at a EucDist of 110; using the same back-calculation as was done earlier for a EucDist of 80, an average variation between daily readings of 4.32C is obtained.

The first place to look for the hockey stick’s signal is the run with EucDist=100, which partitioned the Armagh data into 6 clusters and 16 singletons, and to check whether those 16 singletons come from the youngest 16 years:

clip_image014

As we can see, those 16 singletons come from three different 50-year periods: 3 from the 1844-1900 period, 5 from 1900-1949 and 8 from 1950-1989. So the hockey stick scenario cannot be detected in the singletons.

What about the clusters – are there any ‘clean’ clusters, containing only the youngest years in the dataset?

clip_image016

No hockey stick could be found in the clusters either! The years in the 1990 to 2004 period are partitioned between 4 different clusters, and each of those clusters is mixed with the oldest years in the set. Therefore, the hockey stick hypothesis has to be rejected on the basis of the clustering results.

Let me now introduce the kNN algorithm, which will give us even more information about the youngest years in the dataset. The basic principle of kNN is very similar to my clustering algorithm, but with one difference: dbclus can be seen as an un-biased view of your dataset, where only similarity within a cluster drives the algorithm, while the kNN approach allows the user to specify which datapoints are to be compared with which dataset. For example, to run the algorithm the following command is issued:

“kNN target.csv dataset.csv 100.00 3”, which translates as: run kNN on every datapoint in the target.csv file against the dataset.csv file at EucDist=100.00 and find the 3 nearest neighbours for each datapoint in target.csv. So in our case, we will find the 3 most similar annual patterns in the entire Armagh dataset for each of the 15 youngest years in the dataset:
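The kNN step described above can be sketched like this. This is a hypothetical re-implementation, not the author’s C program, and the toy two-value fingerprints stand in for real 649-point annual patterns:

```python
import math

def eucdist(a, b):
    """Euclidean distance between two fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(target, dataset, cutoff, k):
    """For each labelled fingerprint in `target`, return the labels of
    the k nearest fingerprints in `dataset` (excluding itself) whose
    distance does not exceed `cutoff`, nearest first."""
    results = {}
    for label, fp in target.items():
        dists = sorted(
            (eucdist(fp, other), other_label)
            for other_label, other in dataset.items()
            if other_label != label and eucdist(fp, other) <= cutoff
        )
        results[label] = [other_label for _, other_label in dists[:k]]
    return results

# Toy annual "fingerprints" (illustrative values, not real Armagh data)
dataset = {1844: [5.0, 6.0], 1900: [5.1, 6.1], 1998: [9.0, 10.0], 2004: [9.2, 10.1]}
target = {2004: dataset[2004]}

# Mirrors "kNN target.csv dataset.csv 100.00 3" from the command-line example
print(knn(target, dataset, cutoff=100.0, k=3))
```

With these toy numbers, 2004’s nearest neighbours come back ordered by distance (1998 first), which is the same kind of output summarised in Figure 8.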

clip_image018

Let me pick a few examples from Figure 8: the year 1990 has its most similar annual patterns in the years 1930, 1850 and 1880; supposedly the hottest year, 1998, is most similar to the years 1850, 1848 and 1855; while 2004 is most similar to 1855, 2000 and 1998. So the kNN approach not only confirms the clustering results – which it should, since it uses the same distance calculation as dbclus – but it also identifies the 3 most similar years for each of the 15 youngest years in Armagh. So, whichever way you look at the Armagh data, the same picture emerges: every single annual fingerprint is unique and different from every other; similarity between the years is very low; it is impossible to separate the oldest years from the youngest years; and the magnitudes of those differences in terms of temperature are way outside the error levels of the thermometer, and therefore real. To put this in the context of the hockey stick hypothesis: since we cannot separate the oldest years from the youngest ones in thermometer data, it follows that whatever was causing the daily variations in 1844 is causing the same variations today. And that is not the CO2 molecule.

Let us now ask a very valid question: is the methodology that I am using sensitive enough to detect extreme events? The first thing to bear in mind is that all that dbclus and kNN do is calculate the distance between two patterns made of the original readings – there is nothing inside those two pieces of software that modifies or adjusts the thermometer readings. Anyone can take two years from the Armagh data and calculate EucDist in Excel, and will come up with the same number that is reported in the paper; i.e. I am neither creating nor destroying hockey sticks inside the program, unlike some scientists whose names cannot be mentioned. While the primary objective of the cluster analysis, and the main objective of the paper, was to see whether a hockey stick signal can be found in instrumental data, I have also looked into the results to see whether any other unusual patterns can be found. One year that ‘stubbornly’ refused to merge into the final cluster was 1947, the same year that was identified as ‘very unique’ at 6 different weather stations in the UK, all at lower resolution than Armagh, either as monthly averages or Tmax/Tmin monthly averages. So what is so unusual about 1947? For the analysis I created two boundaries that define the ‘normal’ range, known in statistical terms as the 2-sigma region, which covers approximately 95% of the dataset, and placed 1947 inside those two boundaries. The top of the 2-sigma region is defined by adding 2 standard deviations to the mean, and the bottom by subtracting 2 standard deviations from the mean. So any datapoint that ventures outside the 2-sigma boundaries is considered ‘extreme’:
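The 2-sigma boundaries can be computed as a short sketch. This is an illustration of the general construction described above, assuming (my assumption, not stated in the paper) one mean and standard deviation per calendar day computed across all years:

```python
import statistics

def sigma_bounds(values, n_sigma=2):
    """Return (lower, upper) bounds at mean +/- n_sigma standard
    deviations. For normally distributed data the 2-sigma band covers
    roughly 95% of the datapoints."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return mean - n_sigma * sd, mean + n_sigma * sd

def is_extreme(x, values, n_sigma=2):
    """True if x falls outside the n-sigma band of `values`."""
    lo, hi = sigma_bounds(values, n_sigma)
    return x < lo or x > hi

# Toy Tmax readings for one calendar day across several years
readings = [14.0, 15.0, 15.5, 16.0, 14.5, 15.2, 15.8, 14.8]
print(sigma_bounds(readings))
print(is_extreme(18.0, readings))  # well outside the band -> True
```

A reading such as 1947’s unusually cold February or unusually hot August days would show up here as `is_extreme(...)` returning True for the corresponding calendar days.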

clip_image020

As we can see, 1947 has most of February in the 3-sigma cold region and most of August in the 3-sigma hot region, illustrating the problem with using abstract terms like ‘abnormally hot’ or ‘abnormally cold’ year. So is 1947 extremely hot, extremely cold, or an overall average year?

Let me finish this report with a simple computational experiment to further demonstrate what is so horribly wrong with the man-made global warming hypothesis. Let us take a single day-fingerprint, in this case Tmax207, and use the last year, 2004, as the artificial point where the global (local) warming starts, by adding 0.1C to 2004, then another 0.1C to the previous value, and continuing that for ten years. So the last artificial year is 1C hotter than its starting point, 2004. When you now display the daily patterns for 2004 plus the 10 artificial years that have been continuously warming at a 0.1C rate, you can immediately see a drastic change in the overall profile of the day-fingerprints:
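The construction of the artificial warming series can be sketched like this. A minimal illustration; the starting Tmax207 value used here is a hypothetical stand-in, not the actual Armagh reading:

```python
def artificial_warming(start_value, years=10, step=0.1):
    """Add `step` degrees cumulatively each artificial year: year 1 is
    start+0.1, year 2 is start+0.2, ..., so the final year is
    start + years*step degrees hotter than the starting point."""
    return [round(start_value + step * i, 1) for i in range(1, years + 1)]

# Hypothetical Tmax207 reading for 2004 (illustrative value only)
tmax207_2004 = 20.0
series = artificial_warming(tmax207_2004)
print(series)                       # ten values, 20.1 up to 21.0
print(series[-1] - tmax207_2004)    # 1.0: the final year is 1C hotter
```

The point of the experiment is that this perfectly ordered staircase looks nothing like the chaotic year-to-year fluctuations in the real day-fingerprints.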

clip_image022

What would be worrying, if Figure 10 were based on real data, is that a very small but continuous warming trend of only 0.1C per annum would completely change the whole system from being chaotic, with large fluctuations, into a very ordered linear system with no fluctuations at all.

So let me now summarise the whole paper: there is not a single piece of experimental evidence of any alarming warming or cooling in the Armagh data, or in the sampled data from two other continents, North America and Australia, since not a single paper has been published, before this one, that analyses the only instrumental data that exist – the thermometer data; we do not understand the temperature patterns of the past or the present, and therefore we cannot predict the temperature patterns of the future; all temperature patterns across the globe are unique and local, and everything presented in this paper confirms those facts. Every single aspect of man-made global warming is wrong and is based on a large number of assumptions that cannot be made and arguments that cannot be validated: the alarming trends are all within the thermometer’s error levels and therefore have no statistical meaning; not a single paper has been published that has found alarming trends in thermometer data; and not a single paper has been published validating the reduction of a 730-dimensional, time-dependent space into a single number.

Let me finish this report on a lighter note and suggest a very cheap way of detecting the arrival of global warming, if it ever does come to visit the Earth: let us stop funding any future work on global warming and instead simply monitor and record the accuracy of next-day temperature predictions! If you look at the above graph, it becomes obvious that once the next-day temperature predictions become 100% accurate, that will be a clear and unequivocal sign that global warming has finally arrived, using the following logic:

· chaotic system=no warming or cooling=0% next day prediction accuracy

· ordered-linear system=global warming=100% next day prediction accuracy

And let me leave you with two take-home messages:

· All knowledge is in instrumental data, which can be validated, and none is in calculated data, which can be validated only by yet another calculation

· We must listen to data and not force data to listen to us. As they say, if you torture data enough it will admit anything.

===============================================================================

Dr Darko Butina is a retired scientist with 20 years of experience on the experimental side of carbon-based chemistry and 20 years in pattern recognition and datamining of experimental data. He was part of the team that designed the first effective drug for the treatment of migraine, for which the UK-based company received The Queen’s Award. Twenty years on, the drug molecule sumatriptan has improved the quality of life for millions of migraine sufferers worldwide. During the computational side of his drug discovery career, he developed the clustering algorithm dbclus, which is now the de facto standard for quantifying diversity in the world of molecular structures and has recently been applied to the thermometer-based archived data at weather stations in the UK, Canada and Australia. The forthcoming paper clearly shows what is so very wrong with the use of invented and non-existent global temperatures, and why it is impossible to declare one year either warmer or colder than any other year. He is also one of the co-authors of a paper awarded the prestigious Ebert Prize as best paper of 2002 by the American Pharmaceutical Association. He is a peer reviewer for several international journals dealing with the modelling of experimental data and a member of an EU grants committee in Brussels.

A C Osborn
April 17, 2013 1:25 pm

He makes such elegant sense.

Rhoda R
April 17, 2013 1:47 pm

I have to agree AC. One of the recurring problems has been the naive approach to data.

April 17, 2013 1:47 pm

How is it possible that the same brain that devised dbclust, which sound like a useful procedure, can also believe that Figure 10 is a reasonable expectation of that will happen under global warming. Why should weather stop when global warming starts? Annual fluctuations should be superimposed on the trend, not replaced by the trend.
Part one of this series was silly, this is even sillier.

HAS
April 17, 2013 2:00 pm

“1047” should be “1947” just above Fig. 9
[Reply: Fixed, thanks. -ModE ]

geran
April 17, 2013 2:16 pm

Quote from paper: “Every single aspect of man-made global warming is wrong and is based on large number of assumptions that cannot be made and arguments that cannot be validated: alarming trends are all within thermometer’s error levels and therefore have no statistical meaning; not a single paper has been published that have found alarming trends in thermometer data; and not a single paper has been published validating reduction of 730-dimensional and time dependent space into a single number.”
There you are, MSM, “front-page” news and lead-off stories all done for ya.
(Don’t anyone hold their breath….)

Janice Moore
April 17, 2013 2:21 pm

Bravo, Dr. Butina! Thank you for so generously sharing (and so patiently explaining) the fruits of your labor with us. That you are a master of your subject is proven, I believe, by the fact that a non-science major like I understood (well, I THINK I understood!) most of what you wrote. You are not only a fine scientist, but a teacher par excellence. A bright scientist can discover truth, but only a true master (and I consider “master” in this context to include both males and females) can teach it.
Q. Would it be more accurate to change: “…once the next day temperature predictions become 100% accurate it will be clear and unequivocal sign that the global warming has finally arrived…” (from “lighter note” at end of paper) TO READ: “… once the [predictions of] next day’s [temperature relative to the previous day’s temperature] become 100% accurate it will be clear… .” ? That is to 100% accurately state: either “Tomorrow, it will be warmer,” or “Tomorrow it will be cooler.” I AM TRYING TO SAY WHAT I MEAN (I know I mean what I say… arrgh) #:)
If the absolute temperature value needs to be predicted, if you would be so kind, please explain. Thanks!

Alexander K
April 17, 2013 2:26 pm

Elegant, simply presented and absolutely brilliant! Answers a lot of questions that I have had about the silliness that passes for science in some quarters.

Ken Harvey
April 17, 2013 2:26 pm

Beside the need to recognise the instrument limitations, some sort of allowance needs to be made for reading inaccuracies. I have no opinion as to where that allowance should be set but any allowance at all would have the effect of flattening any trend line.

April 17, 2013 2:28 pm

Dr. Darko Butina,
I found your web site, and the pdf describing dbclus, but didn’t see a reference to pseudo-code or c code, could you kindly point me in the right direction?

Robert Wykoff
April 17, 2013 2:34 pm

Only 24C? I have personally witnessed 105F temperatures in the day time in the Black Rock Desert drop to 18F overnight with all water not in coolers frozen solid at dawn. A drop of 48C in 12 hours. This is not super uncommon.

jorgekafkazar
April 17, 2013 2:46 pm

Fascinating.

David Jay
April 17, 2013 2:50 pm

Robert Wycoff:
In case you missed Part 1, the author is working with the Armagh data set, not the Black
Rock Desert data set.

Joe Public
April 17, 2013 2:54 pm

Robert Wykoff at 2:34 pm
Dr Butina’s work relates to a single weather station daily dataset collected at Armagh Observatory (UK) between 1844 to 2004.

Allencic
April 17, 2013 3:18 pm

I’ve just never understood how such a tiny rise in temperature could be converted into the most horrible event in the history of mankind. How could so many be so thoroughly fooled? Of course, it should never be forgotten that half of the people on Earth are below average in intelligence.

Doug Proctor
April 17, 2013 3:18 pm

The crock gets smellier the more you think about it from a real-world point-of-view.
I suppose the witchcraft mania got worse the more the skeptic thought about it, too, and the obvious insanity of it must have boggled them the same way CAGW is boggling us.
If you did the same thing with the ocean heat content and temperature, you would find this: that the reality is just a statistical artefact without any meaning other than to academics pursuing ideology and grants.
I am reminded of the arguments about how many angels can dance on the head of a pin.

Alan S. Blue
April 17, 2013 3:34 pm

It would also be useful for an experimentalist to analyze the appropriateness of propagating the 0.5C ‘instrumental error’ when the thermometer is used to provide the annual average gridcell temperature.
People prominent in the debate feel a mere 67 thermometers is sufficient coverage.

tobias smit
April 17, 2013 3:48 pm

Not in disrespect but I had a chuckle when the Dr, says that he took his calibration and how a thermometer works from NASA engineers that “actually know what they are doing”
must have been after Hansen left

April 17, 2013 3:54 pm

richard telford says:
April 17, 2013 at 1:47 pm
How is it possible that the same brain that devised dbclust, which sound like a useful procedure, can also believe that Figure 10 is a reasonable expectation of that will happen under global warming. Why should weather stop when global warming starts? Annual fluctuations should be superimposed on the trend, not replaced by the trend.
Part one of this series was silly, this is even sillier.
######################
ya I didnt think it was possible to outdo the howlers in the first part.

April 17, 2013 3:58 pm

From Darko Butina
My website darkobutina@l4patterns.com should be live this weekend (it only has test-run page at the moment) with the full paper (free) and dbclus paper (as pseudo code) as well. May I express my gratitude to Anthony for his vision and dedication to science to allow my paper to be presented on this website knowing that it will upset all those readers that use global averages as reference points to everything and have forgotten to look into instrumental data. I salute you Anthony.
Darko Butina

Dr Burns
April 17, 2013 4:22 pm

Dr Butina,
What is the effect of mixing multiple data sets, from the pole to the equator, mixing summer and winter temperatures, where 30% of the data sets show cooling trends, to look for a hockey stick in the fictitious “average global temperature” ?

george e. smith
April 17, 2013 4:25 pm

I would recommend to Dr. Butina, that he consider, that on any typical northern midsummer day, the hottest surface Temperature on earth could be as high as +60 deg C (140 deg F), with air Temperature of +55 deg C. At the other end of the earth, it is Antarctic winter midnight, and Temperatures could be as low as about -90 deg C, at places like Vostok Station. (-130 deg F).
So that’s a possible extreme daily range of 150 deg C, and due to an argument by Galileo Galilei, there must be some spot somewhere, where any Temperature in that range could be found. In fact, there are an infinite number of such points.
So your 24 deg C is just the tip of the iceberg.
0.7 deg C in 150 years is peanuts.

davidmhoffer
April 17, 2013 4:26 pm

Well the first article didn’t make sense to me, and this one doesn’t either. Even the analogy isn’t apt. Planets are physical bodies that exist at any given time in a specific location in 3 dimensional space. Likening that to temperature data is nonsensical. Telford’s points are also valid. I think I see the value of your approach for certain applications, this just isn’t one of them.

Mark Ryan
April 17, 2013 4:42 pm

I have to agree with the incredulous responses. This paper reminds me of Zeno’s paradox, the argument that in order to move from one point to another, we must first cross the half-way mark…and since any number can be halved, then halved again to infinity, it is clearly impossible to move from one place to another and the world therefore IS NOT REAL.
Dr Butina is of course also refuting the concept of seasons. It follows from this paper that no living person has ever truly witnessed summer follow from spring.

Jim W
April 17, 2013 4:53 pm

You are right on. You dun good.

trafamadore
April 17, 2013 5:16 pm

“Should We Worry About the Earth’s Calculated Warming at 0.7C Over Last the Last 100 Years When the Observed Daily Variations Over the Last 161 Years Can Be as High as 24C?”
wow. let the clueless inspire. And, I am sure some will find this inspiring.

ferdberple
April 17, 2013 5:34 pm

richard telford says:
April 17, 2013 at 1:47 pm
How is it possible that the same brain that devised dbclust, which sound like a useful procedure, can also believe that Figure 10 is a reasonable expectation of that will happen under global warming.
==============
The author has reproduced exactly what he said in Fig 10. The problem is that the chaotic fluctuation in the rest of the graph can also reproduce what you say a warming trend will look like simply by chance or error. So there is nothing reliable in your example to distinguish between chaos and warming.
That is the point of the paper. No one has established that what you say warming will look like is in fact a reliable signature of warming. Instead climate science has gone ahead on the assumption that the math is correct, that warming will look like “X”, without ever laying the foundations.
The author is challenging you to re-examine the most basic assumptions that underlie climate science. Assumptions that have never been demonstrated to be correct. Where is the paper that shows “X” is a reliable signature for CO2 warming? Where is the mathematical rigor?
One we have established a reliable signature for CO2 warming, then we can examine the data to see if in fact the signature exists.

Chuck Nolan
April 17, 2013 5:35 pm

Thanks Dr. Butina.
I would think a lot of areas in the US would have data from 1844 or longer. I believe they’ve been collecting data in every capital city for a long time.
I wonder how it would look if this same chart were displayed for each station worldwide on a billboard near that station.
Caption “This is Global Warming
with this graph right in front of everybody.
If there is station warming due to UHI, instead just show before and after aerial pix with the caption. No graph would be necessary or valid because the data is contaminated with UHI.
As a layman this is how I’ve looked at global warming.
I see this graph as common sense and therefore easily understandable. It’s been something missing from this debate since the beginning, visual believe-ability.
It’s what people need to see to counter the Team and polar bears falling from the sky.
This could get interesting.
cn

Chuck Nolan
April 17, 2013 5:49 pm

I have to chuckle at some of the comments against the paper.
They still think we’re debating in the scientific arena instead of the PR era.
Get real.(?)
You can’t get real because the members of the Team don’t like science.
They like visuals of children exploding and buildings flooding.
cn

ferdberple
April 17, 2013 5:52 pm

george e. smith says:
April 17, 2013 at 4:25 pm
0.7 deg C in 150 years is peanuts.
===========
humans would be very hard pressed to build a machine that has as little drift over 150 years.

ferd berple
April 17, 2013 5:54 pm

????
My posting in answer to
richard telford says:
April 17, 2013 at 1:47 pm
has disappeared
[Reply: Rescued & posted. — mod.]

Simon
April 17, 2013 6:04 pm

April Fools all month long.

H.R.
April 17, 2013 6:06 pm

Dr. Butina says near the end “[…] all temperature patterns across the globe are unique and local and everything presented in this paper confirms those facts. […]”
==============================================================
I’d think the temperature patterns at my house are unique compared to the house next door, if we had thermometers fine enough.
Thank you very much for introducing me to dbclus.

April 17, 2013 6:18 pm

Very elegant discourse from Dr Darko … uncomplicated yet sufficiently analytical … could this be the cornerstone for the destruction of the theory of CGAW ? I certainly hope so, let the world economies return to growth and mankind to prosperity.

Ttrafamadore
April 17, 2013 6:21 pm

[snip – banned trafamadore@gmail.edu is a fake email address
MX record about ‘gmail.edu’ does not exist.
-mod]

ferd berple
April 17, 2013 6:38 pm

davidmhoffer says:
April 17, 2013 at 4:26 pm
Even the analogy isn’t apt.
=======
Your postings are typically of higher quality than this. At a guess this methodology is so different than your training it is rubbing the wrong way. You know it wasn’t an analogy. It was an example of pattern matching. I say cut the author some slack. He is making some interesting points from a fresh point of view.
I remember my math prof from almost 40 years ago. I presented him with a very elaborate statistical analysis of a problem we were both working to solve. There were millions of dollars at stake, which was serious coin back in the day. He looked over my work and fastened his attention on the one graph in the material. When I asked him about the statistics I had spent some many hours and days over, he pointed to the graph. “This I trust”, he remarked. “The rest not so much”.

Mike Bromley the Kurd (this week)
April 17, 2013 6:42 pm

What’s with the ratings?

ferd berple
April 17, 2013 6:49 pm

Mike Bromley the Kurd (this week) says:
April 17, 2013 at 6:42 pm
What’s with the ratings?
=========
I thought the same thing. It is perhaps a form of peer moderation, though it might be subject to abuse. I rather like the notion that WUWT will allow contrary points of view as say compared to RC and others that censor anything that questions the accepted doctrine, so I hope the ratings don’t drive folks away that would otherwise post unpopular remarks.

BarryW
April 17, 2013 6:52 pm

Dr. Butina, since you’re showing that your algorithm rejects a hockey stick in the temperature data, have you run your algorithm against the model runs’ data to see that they do show the clustering that you expect from a hockey stick shaped data set? I would think that the first argument against your method would be that the models might also fail to show the clustering you expect even though they show the “hockey stick”.

George
April 17, 2013 6:54 pm

0.7 degrees Celsius per century is more than 20 times the observed rate of increase in global temperature since the end of the Pleistocene. It is not a trivial amount – it is a huge amount. And clearly fictitious. The observed (geothermometer) temperature increase, based on many publications, is in the range of 3 or 4 degrees C since the end-Pleistocene low. And the Paleocene–Eocene Thermal Maximum calculates to about 15 degrees C above the present average.
Geology is Destiny.

Allen63
April 17, 2013 6:56 pm

I have never read about this approach before. Seems promising. I have to “get my head around it”.
I too have wondered where is the concrete direct proof that the global temperature anomaly can be “accurately” ascertained from the “original uncorrected” temperature records — by the methods Climate Scientists use. By “accurately”, I mean accurately enough to know that there actually has been a human-caused temperature increase in the last 50 or so years.
In any case, using “averages and standard deviations” to prove things involves “assumptions” — that averages and standard deviations actually describe the system. But averages and standard deviations are merely “mathematical constructs” — that may not correspond to a genuine physical reality in a given real-world data set.

OssQss
April 17, 2013 7:14 pm

I want to thank those who post here. We do share the same desire in the end. TRUTH and FACTS do make a difference. There are valid arguments from all sides of every equation. Just look at the focus on water use with respect to ethanol in a prior post. Valid, yet it avoids the fundamental problem that “it takes more energy, when all is considered, to plant it, grow it, fertilize it, maintain it, harvest it, ship it, and finally buy it” than it provides as compared to other sources of energy.
Fundamentals are the decision makers of future policy. Policy makes your life happen the way you see it right now.
Just like my Grandma always told me
“Always tell the truth and you never have to remember what you said”
Do you think most of those CAGW Scientists could pass a lie detector?
They will not even debate the topic, for it would impact the ideological policy they are directed to support through what they provide (FUNDING). LOL!
I can only speak for myself, but I appreciate every single post here, along with every comment.
Perspective and understanding ultimately come to a juncture, and this is the place that happens for many more than myself.
Am I wrong?

April 17, 2013 7:40 pm

Chris Telford & Steve Mosher, Fred B is right. Fig 10 is pattern matching. The actual data shows no pattern in the relationships between years. When Dr. Butina imposed a pattern (0.1 deg C per year for 10 years), the change in the data pops right out as a linear trend. The whole point is that if there were any consistent trends in portions of the data, the method used could clearly capture them. No trends appear because there aren’t any in the data.

April 17, 2013 7:42 pm

Dr. Butina should team up with William Briggs. They’d make a great investigative team.

john robertson
April 17, 2013 7:55 pm

Thank you Darko Butina,Well expressed.
What we have recorded over the years is the thermometer reading.
Can climatology’s global warming be extrapolated from these records?
I agree with your statement that less than the error range is not information; whatever climatology (the cause) is doing, it is not supportable from this temperature data.
As many commentators have pointed out over the years, the temperature record is what we have, it is not sufficient to allow us to make claims of trends or certainty.
Obsessing over 0.5 to 0.7 C change, against error bands that a generous soul would allow as +-0.5C
is as useful as seeking Jesus in the TV white noise or debating on the number of angels dancing on a pin head.
As for what the weather/climate is doing and about to do, best we can say is “Don’t know”.
Previous behaviour is written coarsely upon the rocks of this planet, recent cycles are sketchily writ in human history, there are ghosts of patterns apparent, but we seek to see patterns where none may be.
However, I now believe science was never important to the ideology; science was just the most credible institution available to cloak a lust for power and wealth.

Reg Nelson
April 17, 2013 7:56 pm

trafamadore says:
April 17, 2013 at 5:16 pm
“Should We Worry About the Earth’s Calculated Warming at 0.7C Over Last the Last 100 Years When the Observed Daily Variations Over the Last 161 Years Can Be as High as 24C?”
wow. let the clueless inspire. And, I am sure some will find this inspiring.
____________
Color me clueless then. How can anyone claim a warming of 0.7C over the last century, when that kind of precision didn’t exist in the instruments that were used to record the data for the majority of the century? Furthermore, the readings were documented by fallible human eyes, and at different times of the day/night.
Whatever happened to significant digits? Do they not exist in the Bizarro Climate Science World?
You seem to mock what you don’t understand or can’t explain.

Chuck Nolan
April 17, 2013 8:00 pm

btw I like the ratings.
It’s nice to have a way to concur with a quick nod to a fellow traveler.
cn

Mike Bromley the Kurd (this week)
April 17, 2013 8:03 pm

ferd berple says:
April 17, 2013 at 6:49 pm
I concur. It’s kind of like a mini c-word on every post. 97% Liked….er, drat, WUWT ain’t Farcebook.
What I get from the article (despite the detractors, and some of their valid points) is that a 0.7-degree temp difference is not perceivable….and basically unimportant. But more: the whole shebang rests on a calculated number, i.e. a model of sorts, which states: “a temperature difference is alarming”. It’s not. Especially a difference arrived at by averaging measurements. A circumstance that seems to blatantly ignore error bars and exaggerate the rise or decline by amplifying the Y axis, all the while misusing centigrade degrees as a misrepresentation of Kelvin. The hockey stick is basically masquerading as a plank in a platform of smokescreens.

Manfred
April 17, 2013 8:22 pm

This may sound facile to all you mathematical and statistical cognoscenti out there, but could someone please explain to me when global mean temperatures are stated why is it that the range and the standard deviation for a given mean value are not stated together with that value? The mean on its own is (in my branch of science) seen as meaningless.

April 17, 2013 8:39 pm

From Darko Butina
A brief point: some readers have either misread or misinterpreted the importance of the number and magnitude of switch-overs when comparing any two years. I started by comparing 2004 vs 1844, but went on to say that I wrote a special program to compare every year with every other year in the dataset, which means 161×160 comparisons, and these clearly show that the switch-over pattern is the norm. Let me give you some numbers: year 2004 is on 399 occasions warmer BUT on 250 occasions colder than year 1844 (160 years apart), with a maximum difference of 10.9C in one direction and 8.8C in the other. At Waterloo, year 2009 is on 239 occasions warmer and 375 occasions colder than year 1998 (11 years apart), with a total switch-over range of 38.5C, while at Melbourne, 2009 was 219 times warmer and 146 times colder than 1998 (11 years apart), with a total range of 43.2C (21.7C in one direction and 21.5C in the other). To me that means two things: the switch-over happens every few days, and coupled with the sheer magnitude of those switch-overs it is impossible to declare one year either warmer or colder, if you look at the thermometer data and are not playing some silly numbers game. Since the same patterns have been observed on two different continents, any scientifically based logic leads to the conclusion that those patterns should be found at all other weather stations that record temperatures on a daily basis. Following standard practice in the experimental sciences, I have asked readers to be sceptical and to prove me wrong, not by expressing their opinions on global warming or annual temperatures, but by actually looking at the thermometer data. I even offered an award for the first person who proves me wrong.
And may I emphasize again: it is not me claiming that there is unequivocal global warming; that is the official line of the man-made global warming community. All I am saying is that I cannot find either warming or cooling in the thermometer data, and that nobody before me has bothered to look at the thermometer data and report that work. So please, look at the thermometer data of a weather station of your choice, compare the aligned daily readings of any two years, and try to explain, to yourself first, those graphs, then let us all know; for example, whether 2004 is warmer than 1844, and if so, by how much and how you got the number. And please bear in mind the following: the thermometer data came first, and the annual averages were derived from it. When those who invented annual averages could not find warming trends in the thermometer data (and that is the whole point of my paper), they simply threw the thermometer data away and a miracle happened: they suddenly found all the trends needed to start the religion of man-made global warming.
So, please look at the data and enlighten us all as to where that alarming global warming is hiding!
Darko Butina
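For readers who want to take up the challenge, a minimal sketch of the year-vs-year "switch-over" comparison described above might look like the following. The two yearly series here are synthetic and invented purely for illustration; they are not Armagh readings.

```python
# Sketch of the "switch-over" comparison: align two years of daily
# temperatures day-by-day, count which year is warmer on each day,
# and record the largest difference in each direction.
import math

def switch_over(year_a, year_b):
    """Return (days a warmer, days a colder, max diff a-b, min diff a-b)."""
    diffs = [a - b for a, b in zip(year_a, year_b)]
    warmer = sum(1 for d in diffs if d > 0)
    colder = sum(1 for d in diffs if d < 0)
    return warmer, colder, max(diffs), min(diffs)

# Two hypothetical years: same seasonal cycle, different day-to-day weather.
days = range(365)
year_a = [10 + 8 * math.sin(2 * math.pi * d / 365) + 3 * math.sin(d) for d in days]
year_b = [10 + 8 * math.sin(2 * math.pi * d / 365) + 3 * math.cos(d) for d in days]

warmer, colder, max_up, max_down = switch_over(year_a, year_b)
print(warmer, colder, round(max_up, 1), round(max_down, 1))
```

Run against real daily Tmax records for any pair of years to reproduce counts like the 399-warmer/250-colder figures quoted above.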

davidmhoffer
April 17, 2013 8:42 pm

ferd berple says:
April 17, 2013 at 6:38 pm
davidmhoffer says:
April 17, 2013 at 4:26 pm
Even the analogy isn’t apt.
=======
Your postings are typically of higher quality than this. At a guess, this methodology is so different from your training that it is rubbing you the wrong way. You know it wasn’t an analogy. It was an example of pattern matching. I say cut the author some slack. He is making some interesting points from a fresh point of view
>>>>>>>>>>>>>>>>>>>>>>>>>
I did cut him some slack. I could have been much harder on him. His example using planets and their distance from each other is his analogy, not mine. His analogy defines clusters of objects separated by 3 dimensional space. The climate data that he then analyses is a series of data points separated by time which is a linear unidirectional dimension.
As for his artificial trend that he put on the data, that is totally misleading. He’s made the assumption that any warming must appear as a uniform linear trend. He’s correct that if it did, he’d pick it up using this method. The problem is that he’s made a false assumption. In a chaotic system of unknown complexity, you simply cannot make that assumption. The amount of warming from any given forcing can, and in fact must, vary over time depending on the sum total of all other conditions at any given moment. Allow me the following example:
Do clouds cause warming or cooling?
The answer is, it depends. When incoming insolation is higher than outgoing earth radiance (say noon in the summer) a cloud cutting in front of the sun is a clear cooling agent. But at night, when incoming insolation falls to zero, a cloud keeps earth surface warmer than it would otherwise be.
So the answer to the question is “yes”.
While the author has provided a method to identify a uniform forcing, he cannot state categorically that the warming from CO2 is uniform. It most likely isn’t, particularly when one considers feedbacks, which are also not uniform. He is in effect testing for a condition that physics says probably doesn’t exist.

April 17, 2013 9:01 pm

Dr. Butina, I have plotted thermometer records for nearly 90 US cities at the link below. I can send the data to you, if it would be useful. Some cities show a slight warming, some show no trend, and a few show cooling.
http://sowellslawblog.blogspot.com/2010/02/usa-cities-hadcrut3-temperatures.html?m=0
Your article is very interesting and I thank you for posting it here.
Best regards,
Roger Sowell

davidmhoffer
April 17, 2013 9:04 pm

Manfred says:
April 17, 2013 at 8:22 pm
This may sound facile to all you mathematical and statistical cognoscenti out there, but could someone please explain to me when global mean temperatures are stated why is it that the range and the standard deviation for a given mean value are not stated together with that value? The mean on its own is (in my branch of science) seen as meaningless.
>>>>>>>>>>>>>>>
Now there’s a valid point, and one that the heroes of the IPCC simply will not answer. It was probably my first clue that something was amiss in the climate debate when I first got interested in it years ago. I kept asking “where are the error bars?” on what I thought were science sites, only to get slapped down with ridicule or snipped altogether. It was several years of frustration before I discovered WUWT and other sites where that and other issues were being raised.
I’ll go one further though, and challenge anyone to come up with a way to calculate average temperature of the earth in the first place, and how it relates to average insolation and the SB Law temperature that it should produce. It can’t be done. I’ve demonstrated in the past with simple physics that the earth could be gaining energy while exhibiting a lower temperature. Average insolation and average temperature on an oblate sphere rotating in space with an axial tilt and an orbit shaped like an egg but with ripples in it from Jupiter and Saturn….meaningless numbers. They simply cannot be averaged in any meaningful way.

davidmhoffer
April 17, 2013 9:21 pm

darkobutina says:
April 17, 2013 at 8:39 pm
I have asked readers to be sceptical and to prove me wrong, not by expressing their opinions on global warming or annual temperatures, but by actually looking into thermometer data. I even offered the award for the first person who proves me wrong.
>>>>>>>>>>>>>>>>
Sucker bet. It cannot be won because temperature is meaningless in this context. You cannot tell if CO2 or any other forcing is warming the earth because the direct effect of any forcing is measured in w/m2, not degrees. It cannot be measured in degrees. The following example is illustrative. Consider two points on earth at temps of 280 and 300 degrees Kelvin, for an average of 290. Via SB Law:
P=5.67*10^-8*T^4
They would be at 348.5 and 429.4 w/m2 respectively.
The average of which would be 389.0 w/m2
Which by SB Law would not be an average temperature of 290, but 287.8 K.
To further illustrate, suppose that the cold temp (280) went up by one degree and the warm temp (300) went down by one degree. The average temp by calculating (281+299)/2 still yields 290. But the w/m2 has actually gone up by 5.0 w/m2 in the cold region and down by 6.1 w/m2 in the warm region, so this two thermometer planet supposedly has the exact same temperature despite losing an extra 0.5 w/m2 on average.
Which is why analyzing temperature data to death with ANY statistical method will tell you exactly nothing about energy balance and what CO2 does or does not do.
I’m a raging skeptic over this whole climate mess, so I’d be very happy if this article was proof of my point of view. But it isn’t.

davidmhoffer
April 17, 2013 9:32 pm

OK I messed up the math in my previous post.
280 = 348.5 w/m2, 300 = 459.3 w/m2, and the average comes out to 290.5 degrees K.
It’s late, and I appear to have the Excel skills of Phil Jones at this point in the evening. But my point stands. With no linear relationship between w/m2 and degrees, you just cannot draw any conclusions about energy balance and warming or cooling from temps alone.
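The corrected arithmetic is easy to check with a short script. This is a sketch of the two-point example only, using σ = 5.67×10⁻⁸ W/m²K⁴ as in the comment above; it shows that averaging two temperatures is not the same as averaging the fluxes they imply.

```python
# Stefan-Boltzmann: P = sigma * T^4. Because the relation is nonlinear,
# the temperature implied by the mean flux differs from the mean temperature.
SIGMA = 5.67e-8  # W/m^2/K^4

def flux(t_kelvin):
    """Radiated flux in W/m^2 for a blackbody at t_kelvin."""
    return SIGMA * t_kelvin ** 4

def effective_temp(p):
    """Blackbody temperature (K) that radiates flux p."""
    return (p / SIGMA) ** 0.25

t_cold, t_warm = 280.0, 300.0
mean_temp = (t_cold + t_warm) / 2              # 290.0 K
mean_flux = (flux(t_cold) + flux(t_warm)) / 2  # ~403.9 W/m^2
t_from_flux = effective_temp(mean_flux)        # ~290.5 K, not 290.0

print(round(flux(t_cold), 1), round(flux(t_warm), 1))  # ~348.5, ~459.3
print(round(t_from_flux, 2))
```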

April 17, 2013 9:52 pm

Darko Butina to Roger,
Your comment highlights the horrible state the whole climate community is in. Annual temperatures are NOT temperatures; they have no physical properties, they do not exist, and they cannot be measured. Ask yourself: how do you validate the HadCRUT data? By some other ‘rigorous’ statistics, i.e. you validate one calculation with another calculation! People have to realize that the annual average was not invented for any scientific reason, but as a desperate attempt to find a correlation between temperatures and the few molecules of CO2 generated by burning fossil fuels, while ignoring the vast majority of CO2 molecules produced by nature. Since they could not find any warming trends in the thermometer readings, they invented a parameter that cannot be validated by any instrument, creating a situation where everyone can publish at will and cannot be proven wrong. Any trend analysis using something that does not exist, like annual averages, is totally useless, and I can categorically state here with 100% certainty that any model using the annual average as a reference point will have 0% predictive power. And how do I know that? Because I spent 20 years doing predictive modelling in a market-driven sector, where the predictive power of a model is judged NOT by R^2 but by experimental results. The difference between this mickey-mouse modelling of future temperature trends and modelling in a market-driven sector is that the first cannot be validated for another 100 years, while the second has to produce an end product. Hope this helps.

DaveA
April 17, 2013 10:05 pm

Houston we may have a problem.
If solar output steadily increased:
a) Would the Earth’s average temperature steadily increase?
b) Would this method conclude (a) is not happening?
I have a hunch the answers are Yes and Yes.

April 17, 2013 10:11 pm

Darko,
If “annual temperatures are NOT temperatures”, are monthly temperatures temperatures? How about daily temperatures? Hourly? Minute? Second? Nanosecond? Given that measurements are (nominally) instantaneous, averaging them over any period of time necessarily requires some assumptions. Anyone sufficiently interested could rather easily come up with tests (using either real or synthetic data) to see the extent to which averaging temperatures over different temporal resolutions can introduce uncertainty.
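One such synthetic test might look like the sketch below: a deterministic, deliberately asymmetric diurnal cycle (an invented shape, purely for illustration), where the common (Tmin+Tmax)/2 convention lands well below the true hourly mean.

```python
# Synthetic test: how much can an averaging convention bias "daily mean
# temperature"? The diurnal shape here is hypothetical, chosen to be
# asymmetric so the min/max midpoint and the hourly mean disagree.
import math

def temp_at(hour):
    """Invented asymmetric diurnal cycle around a 15-degree mean."""
    w = 2 * math.pi / 24
    return 15 + 8 * math.sin(w * hour) + 3 * math.cos(2 * w * hour)

hourly = [temp_at(h) for h in range(24)]
true_mean = sum(hourly) / 24                   # the sinusoids average out to 15
minmax_mean = (min(hourly) + max(hourly)) / 2  # the classic (Tmin+Tmax)/2

print(round(true_mean, 2), round(minmax_mean, 2))
```

For this particular shape the (Tmin+Tmax)/2 convention undershoots the hourly mean by more than 2 degrees, which is the kind of resolution-dependent uncertainty the comment describes.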

davidmhoffer
April 17, 2013 10:20 pm

Darko,
http://ocean.dmi.dk/arctic/meant80n.uk.php
Compare 1960 to 2012. By your method there’s been warming and plenty of it.

davidmhoffer
April 17, 2013 10:26 pm

Darko;
And how do I know that – because I spent 20 years doing predictive modelling in market driven sector –
>>>>>>>>>>>>>>>>
Ah, I think I see the problem. You think that predictive modelling of a market somehow equates to predictive modelling of physics? I know English isn’t your first language, but if that is what you meant… sorry, it doesn’t work that way.

ferd berple
April 17, 2013 10:42 pm

davidmhoffer says:
April 17, 2013 at 10:26 pm
sorry, doesn’t work that way.
==========
apparently not all agree
http://arxiv.org/ftp/arxiv/papers/0805/0805.3426.pdf
Dynamical systems in nature such as atmospheric flows, heartbeat patterns, population dynamics, stock market indices, DNA base A, C, G, T sequence pattern, etc., exhibit irregular space-time fluctuations on all scales and exact quantification of the fluctuation pattern for predictability purposes has not yet been achieved.

Mark Aurel
April 17, 2013 10:47 pm


“degrees, you just cannot draw any conclusions about energy balance and warming or cooling from temps alone.”
What?
When we are talking about a warming climate, what are we talking about if not a rising temperature?
Despite all your verbose but obscure posting on this, you avoided the basic tenet of his theoretical proposal, i.e. if the temperature is rising at a steady 0.1C yearly then, despite the chaotic nature of weather, at the end of the 10-year period the trend line would simply have to be inclined upwards.
As to the planets and temps, it was not a comparison but an example.
What is it that you so detest in this guest post?

ferd berple
April 17, 2013 10:55 pm

more on the mathematical errors of naively applying statistics to climate
http://arxiv.org/ftp/arxiv/papers/0805/0805.3426.pdf
The Gaussian probability distribution used widely for analysis and description of large data sets underestimates the probabilities of occurrence of extreme events such as stock market crashes, earthquakes, heavy rainfall, etc. The assumptions underlying the normal distribution such as fixed mean and standard deviation, independence of data, are not valid for real world fractal data sets exhibiting a scale-free power law distribution with fat tails.
And this article showing how the plotting formulas for estimating extreme events are wrong.
http://journals.ametsoc.org/doi/full/10.1175/JAM2349.1
Consequently, the various other methods for determining the plotting positions, suggested during the last 90 years, such as the formulas by Blom, Jenkinson, and Gringorten, the computational methods by Yu and Huang (2001), as well as the modified Gumbel method, are incorrect when applied to estimating return periods.

davidmhoffer
April 17, 2013 10:58 pm

Mark Aurel;
What?
When we are talking about a warming climate what are we talking about if not a rising temperature.
>>>>>>>>>>>>>
CO2 doubling doesn’t change temperature. It changes w/m2. W/m2 in turn changes temperature. But 3.7 w/m2 at -40 C changes temperature by 1.3 degrees. 3.7 w/m2 at +40 C changes temperature by 0.54 degrees.
So when the IPCC says that CO2 doubling = 3.7 w/m2 = +1 degree, I have no idea what they mean, and neither do they.
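The two figures quoted can be checked by linearizing the SB relation: dT ≈ ΔP/(4σT³). A sketch (the linearization gives roughly 1.29 K at −40 °C and 0.53 K at +40 °C, close to the figures above):

```python
# Linearized Stefan-Boltzmann sensitivity: the temperature change produced
# by a fixed flux change depends strongly on the starting temperature.
# From P = sigma*T^4:  dT ≈ dP / (4*sigma*T^3)
SIGMA = 5.67e-8  # W/m^2/K^4

def dt_for_dp(dp, t_kelvin):
    """Approximate temperature change (K) for a flux change dp (W/m^2)."""
    return dp / (4 * SIGMA * t_kelvin ** 3)

dp = 3.7  # W/m^2, the CO2-doubling forcing cited above
print(round(dt_for_dp(dp, 233.15), 2))  # at -40 C: about 1.29 K
print(round(dt_for_dp(dp, 313.15), 2))  # at +40 C: about 0.53 K
```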

davidmhoffer
April 17, 2013 11:01 pm

Mark Aurel;
if the temperature is rising at a steady 0.1C yearly then despite the chaotic nature of weather, at the end of the 10 year period the trend line would simply have to be inclined upwards.
>>>>>>>>>>>>>>>>
My point was that an increase in CO2 does not dictate a steady temperature increase. In fact the physics suggest that a steady temperature increase due to an increase in CO2 is unlikely, if not impossible. So testing for something that is unlikely or impossible in the first place and finding that it doesn’t exist proves what?

ferd berple
April 17, 2013 11:24 pm

davidmhoffer says:
April 17, 2013 at 8:42 pm
His analogy defines clusters of objects separated by 3 dimensional space. The climate data that he then analyses is a series of data points separated by time which is a linear unidirectional dimension.
=========
An analogy compares one thing to another. He is not comparing climate to planets. He was showing the effect of scale on clustering. Climate data is a 2D projection into the plane formed by temperature and time. Without temperature, climate would be a straight line.
The planets can also be projected in 2D into the orbital plane. However, the measure of clustering is distance which is independent of dimension. The method discovered by Pythagoras allows us to calculate distance between any two points in N-space and returns a scalar.
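The distance measure in question is a one-liner in any language; a sketch showing that the same formula returns a scalar whether the points live in 2, 3, or N dimensions:

```python
# Euclidean (Pythagorean) distance: works identically in any number of
# dimensions and always returns a single scalar.
import math

def euclidean(p, q):
    """Distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean((0, 0), (3, 4)))              # 5.0 in 2D
print(euclidean((0, 0, 0, 0), (1, 1, 1, 1)))  # 2.0 in 4D
```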

Rick Bradford
April 17, 2013 11:25 pm

In fact, the Armagh data is available up to the present day, not just 2004.
Start at http://climate.arm.ac.uk/scans/2005/01/summary.html and work forward.

April 18, 2013 12:42 am

Darko Butina to davidmhoffer
If you read carefully, what I said was: “because I spent 20 years doing predictive modelling in market driven sector”, NOT predicting markets. In my case, as you can see from my brief CV, it was the drug discovery sector, where the end product is a drug molecule. If a model picks 100 molecules to make, and you test those 100 molecules in the biological screen that was used to build the model in the first place and get 80 actives, then your prediction rate is 80%. The whole point of our discussion here is that in the experimental sciences, when you make the statement ‘we know’, it means you understand exactly what the underlying mechanism is and you deal in certainties, not probabilities. We know that the drug molecule I was involved with works because over the last 20 years it has drastically improved the quality of life for millions of migraine sufferers, and we know that we understand the principles of the combustion engine because we have produced millions of them. So if you want to find out whether there is a correlation between CO2 and temperatures, you don’t calculate; you measure daily concentrations of CO2 at the same place where the thermometer is. And what you will find is that the correlation is not there and cannot be there, since it would violate the gas laws. No gas molecule in an open system, as our atmosphere is, can control temperature; it is the other way around: temperature controls the behaviour of gas molecules. And how do I know that? Because I worked twenty years in carbon-based chemistry and used CO2 in chemical reactions, and to perform a chemical reaction you have to know everything that is known about the molecules used in that reaction. All you need to do is go to http://www.engineeringtoolbox.com and look up temperature vs density of gases. Hope this helps.

pete
April 18, 2013 12:44 am

Leaving aside the scientific content for a moment, I would like to comment on another issue: the quality of the writing of this post. The grammar and syntax of this is below the standard I normally see on this site. I’m surprised that the author did not ask a friend or find someone who could help with a little copy-editing before publication. It’s one thing for throwaway comments to be a bit rough and ready with the language, but the continuous stream of awkward phrasing and the consistent omission of definite and indefinite articles in the post start to convey an impression of amateurishness and sloppiness, which is unfortunate, because it takes attention away from the content. It’s easy to find copy-editors who would be able to polish up an article suitable for publishing. Many here would probably volunteer their services as a way of helping to add quality contributions to the debate. I’m not trying to be harsh here — I’m just saying that some things can be improved, for everyone’s benefit.

April 18, 2013 1:04 am

DaveA says: April 17, 2013 at 10:05 pm
———————-
Maybe the cyclic variation in solar energy is contained within the range of the temperature record … heating and cooling.

April 18, 2013 1:44 am

Probably not relevant to the discussion here, but I remember the winter of 1947 very well. We were snowed in for 6 weeks, I got 6 weeks off school, my sister went to town to stay with a friend so that she could sit her school leaving exams and was taken there on the train full of 700 soldiers who were digging the snow from the nearby railway line. My Mother was very worried but my Father said that she was completely safe because all the troops would be watching each other, and anyway he had been in the Army with the CO.
Can’t wait for part 3.

johnmarshall
April 18, 2013 1:50 am

Temperature is only accurate for an object if all molecules of that object are at equilibrium. This is possible only at 0K (absolute zero) which is a theoretical temperature because to get all molecules at the same kinetic energy is impossible. To get an average temperature of a planet’s atmosphere is also impossible and meaningless because equilibrium is impossible in a chaotic regime like our atmosphere.
Maximum temperature measured on the surface is over 50C; the minimum is -80C, a span of 130 degrees. Both temperatures are possible on the same day, the warmest in the NH, the coldest in the SH. Bothering about less than a degree of change is total stupidity.

agwnonsense
April 18, 2013 1:58 am

funny about that

Ryan
April 18, 2013 2:18 am

I feel that Dr Butina’s approach only adds ever more complicated statistical approaches to what should be a simple hypothesis to test: does an increase in atmospheric CO2 result in an increase in temperature?
Sadly, I have not seen a single paper that actually approaches this hypothesis and attempts to test it correctly. What so-called scientists have done is outright fraud, and it is to the shame of the scientific community that they have gotten away with it so long. What Phil Jones and crew have done is simply plot temperatures over time. They have then found a number of years that show a trend that happens to be upward and they have simply stated that that trend is caused by CO2 et voila, we have a measurement of climate sensitivity.
This is nonsense as I will quickly prove. Here is a hypothesis: “It can be shown that your star sign determines the likelihood you will suffer from depression and Greek astrologers were right and horoscopes have validity”. OK, so do we have measurements that can be used to plot likelihood of depression against star-sign? Yes we do. Children born under the star-sign Capricorn show a significantly higher chance of developing depression than those born under the star-sign Leo. Therefore the hypothesis is proved right?
WRONG! Because the star-signs just happen to coincide not with the position of the distant stars in our galaxy but with one particular star: THE SUN! If you are born in winter you are more likely to suffer depression than if you are born in summer, and guess what – that also determines your star-sign!
Similarly the data produced by Phil Jones et al. shows not the trend in temperature related to CO2 but an entirely coincidental trend that happens to have been caused by a very slight decrease in cloud-cover over a 45 year period since WWII. If anybody from the AGW wants to prove me wrong they are welcome to go ahead and actually TEST their original hypothesis instead of looking for random “trends” in limited temperature data and claiming them as their own.

marcus25
April 18, 2013 2:20 am


forget it mate, you are carrying on with your theme regardless of what we ask or say.
He proposed an “Hypothetical scenario” in the context of his paper!
Do you deny that a steady increase in temp will produce a correspondingly rising trend-line?
Nothing to do with CO2 or anything.

April 18, 2013 2:38 am

David Hoffer:
I’m not sure exactly what your beef is about this time. I agree with Ferd in that you normally address details in a discussion rather than getting hung up on the semantics of whether an analogy is proper or not.
Forget the 3D planetary approach.
Forget the CO2 signature approach.
Both of these are suggestions, not absolutes.
I take Dr. Darko Butina’s post as a suggestion to start from scratch in analyzing temperature/weather data and building the metadata necessary to define the data/database parameters.
–A) Analyze the measurement device and define its parameters.
——a) For all comparisons over multiple time scales, the worst level of measurement accuracy is the comparison’s accuracy.
————1) This is mandatory for comparisons, averages, anomalies, etc.
–B) After defining measurement of temperature; start with one known record and analyze it.
——a) This doesn’t mean that Dr. Darko Butina’s method is the only analysis method; just that it is one of many approaches.
————-1) Understand just what the relationships are between all records in the database.
————-2) Identify all ranges; natural, unusual, extreme, outliers, etc.
–C) Move on to a second record, third, fourth, and so on.
——-a) Each analysis of a record builds the metadata, defines parameter extremes, accuracy bars.
——-b) Ideally this is where Anthony’s weather station model comes into play as the only way to get data from multiple stations worth analyzing is to ensure their methods, equipment, locations, siting, maintenance are identical.
At the end, what one is most likely to end up with is a realization that all current systems for measuring temperature/weather cannot be used for the purposes they are used for. What one can do is make very general statements. (e.g. “It sure is cold today Mikey” “Ayup, it’s cold here too Phil”).
Yeah, it’s great to look at the end of a lengthy statistical mumbo-jumbo mega-averaging and ask about error bars… What Dr. Butina is implying is that we do not even take the error bars of the base data or record into consideration first. GIGO is GIGO whether you’ve processed it a thousand times or looked at the basic system and realized the data is useless for CAGW alarmist intentions before further processing.
This doesn’t make any of your qualms incorrect David. But I understand where Dr. Butina’s method helps define/build the required metadata and what that metadata means for all of the team’s fanciful AGW dodges.
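The accuracy point in (A)(a) above can be sketched in a few lines. This is a minimal illustration, assuming the ±0.5C thermometer accuracy quoted in the article; the function name and example readings are my own, purely for demonstration:

```python
# Sketch of point (A)(a): a comparison between two readings can only be as
# accurate as the worst instrument involved.  The +/-0.5 C accuracy and the
# example readings are illustrative assumptions, not station data.

def significant_difference(t1, t2, acc1=0.5, acc2=0.5):
    """Return True only if |t1 - t2| exceeds the combined worst-case error."""
    worst = max(acc1, acc2)        # the comparison inherits the worst accuracy
    threshold = 2 * worst          # each reading can be off by up to `worst`
    return abs(t1 - t2) > threshold

# A 0.7 C difference between two +/-0.5 C readings is within instrument error:
print(significant_difference(14.0, 14.7))   # False: "no information"
# A 10 C daily swing is far outside it and therefore real:
print(significant_difference(5.0, 15.0))    # True
```

This is exactly the “no information within 1C” rule from Part 1, restated as a yes/no test.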

Chuck Nolan
April 18, 2013 2:39 am

Roger Sowell says:
April 17, 2013 at 9:01 pm
Dr. Butina, I have plotted thermometer records for nearly 90 US cities at the link below.
http://sowellslawblog.blogspot.com/2010/02/usa-cities-hadcrut3-temperatures.html?m=0
Roger Sowell
———————————————————–
I liked it until I noticed you hid your results by using different scaling for each graph?
No apples to apples comparison.
cn

peter azlac
April 18, 2013 3:07 am

Bravo Dr Butina for shedding some much needed light into the murky world of “climate science” that requires inventive statistics to make the points required by the IPCC CAGW myth.
Like you, I am a retired scientist (an agricultural scientist) who learned science in the days before desktop computers with statistical programs became available to allow the data mining activities that pervade climate science, in fact at about the same time that Lorenz was coming up with chaos theory. Agricultural science is one of the first areas for which statistics was developed by Fisher among others. At that time, we only had hand turned mechanical calculators so had to be very careful how we designed our experiments if they were ever to be analyzed – this later extended to the use of mainframe computers where time and cost limited usage.
I was struck by your comment:
“So if you want to find whether there is correlation between CO2 and temperatures, you don’t calculate but you measure daily concentrations of CO2 at the same place where the thermometer is. And what you will find is that it is not there since and cannot be there since it would violate all gas laws.”
As a young scientist I first worked on an agricultural research station in Africa where, as a member of the team on a new research station, we made a modest contribution to the “Green Revolution” of that era, which disproved the earlier Malthusian alarm claiming that the increasing population would lead to starvation and death in underdeveloped countries, much like the current IPCC alarm. In fact, through the application of science and well designed experiments we increased grain production 20-fold, as well as achieving high performance from many other crops. We did so through a combination of selecting crops and matching them to soil types, combined with the use of fossil fuels in the form of diesel for cultivation, plus fertilizers and agrochemicals.
What is relevant in terms of your pattern recognition program is that we defined our local climates – we had three: lowland, midland and upland, with different temperature and rainfall profiles – and then matched them via the Koppen classification with similar areas around the world. The data we used can be seen in the Armagh records – rainfall, sunshine hours and temperature, plus, where available, Class A Pan Evaporation data. These were taken from agricultural research stations that have such data. The result was that we were able to successfully introduce many crops into the local agriculture in a matter of a couple of years: cotton, tobacco, rice and citrus from the USA, pineapples from Australia, maize from the then Rhodesia, sugar cane from S Africa etc.
One of my roles was to handle the meteorological data and since then I have been convinced that the only relevant way to monitor climate change is by creating similar data to that which we used for all the Koppen climate zones and monitor the rate at which climate boundaries are changing and take appropriate action – temperature alone is not a sufficient metric and especially not a global average. This suggests to me that your algorithm can be used for this purpose if the data used is extended beyond temperature and that in fact in climate studies it should be.
Frank Lasner with his Ruti project is doing work in this direction and whilst his “zones” are not directly linked to the Koppen-Geiger zones he is finding substantial differences that support a more detailed study.

Jessie
April 18, 2013 4:32 am

Thank you Dr. Butina. So well explained and interesting, though I am still working through the maths.
As I understand, you are focussed on specificity.
Monckton focussed on sensitivity.
Not well stated, but… to measure the proposed phenomenon, the choice/appropriateness of the instrument (specificity) is as important as the instrument’s ability to measure accurately (sensitivity).

KenB
April 18, 2013 4:43 am

Ryan says:
April 18, 2013 at 2:18 am
Gave you a thumbs down with your attempt purely because a Leo can be born in either the summer or winter depending upon where they are born, unless your world is seasonally indifferent!! (wink)

Stacey
April 18, 2013 5:38 am

Thank you Dr Butina.
So what would be the findings if you adopted the same approach and compared various actual temperatures for the last thirty plus years with the Lower Troposphere Temperatures as discussed in a later post by Dr Spencer?

April 18, 2013 5:53 am

@ Chuck Nolan
“I liked it until I noticed you hid your results by using different scaling for each graph?
No apples to apples comparison.”
I did not hide any results. The graphs’ scaling was to show each graph as clearly as possible.
I was interested only in the slope of the least-squared trend line for each city. That slope is shown in black at the upper right corner of each graph as the equation Y = nnnn X + B, where the slope is “nnnn.”
What the results show is that some cities have zero warming, which agrees with Dr. Butina’s result in this article. Some US cities show a cooling, a negative slope to the trend line.
If CO2 were responsible for warming, it should warm adjacent cities alike; but it does not. One can compare many pairs of cities to see this, for example, Shreveport and St. Louis. Shreveport is not warming but St. Louis is warming.
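The per-city fit Roger describes is an ordinary least-squares line, Y = nnnn X + B, with the slope “nnnn” as the quantity of interest. A minimal sketch follows; the temperature values are made-up illustrative numbers, not real station records:

```python
# Sketch of the per-city trend-line fit: an ordinary least-squares line
# y = m*x + b fitted to annual mean temperatures, with the slope m ("nnnn"
# in the graphs) as the quantity of interest.  The data are illustrative.

def ls_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1900, 1910))
temps = [14.2, 13.9, 14.1, 14.0, 14.3, 13.8, 14.1, 14.2, 13.9, 14.0]

m = ls_slope(years, temps)
print(f"slope = {m:.4f} C/year")   # near zero: no trend in this sample
```

A city with “zero warming” in Roger’s sense is simply one whose fitted slope is indistinguishable from zero; a cooling city has a negative slope.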

Robany Bigjobz
April 18, 2013 6:52 am

Dr Butina, I find your analysis interesting but there are some aspects I have not yet got a handle on. More thought is required but there are some things that would bolster your argument that your approach is the correct one.
You claim that your method does not detect any hockey stick in the temperature data of Armagh. This point would be greatly strengthened by demonstrating that it does, in fact, detect a hockey stick (or an underlying trend) when such has been placed in the data artificially. By this I mean choosing a yearly change in temperature over time and adding this value to every single measurement in the relevant year e.g. leave all Armagh data 1844 – 1984 alone and add an additional 0.02C/year to each subsequent year’s data. This meets your own criterion of requiring every day to be warmer for a year to be considered warmer. This, unlike your rather unphysical approach in figure 10, would preserve the chaotic nature of the day to day and year to year variations but provide a defined, albeit synthetic, warming signal. If, and only if, your method detects and identifies this synthetic signal can you realistically claim that the lack of detection in the raw data means lack of signal.

Chuck Nolan
April 18, 2013 7:22 am

Roger Sowell says:
April 18, 2013 at 5:53 am
@ Chuck Nolan
——————————–
No negative critique intended Roger, the graphs are great. I was just looking for visual aides to demonstrate the alarm is overblown and I think your graphs are the right idea.
My point was the graphs use different scaling but to get a quick layman’s eyeball view I would need the temperature readings and time line of each graph to match.
Otherwise, I understand your motive and I agree with you.
cn

Greg House
April 18, 2013 7:25 am

davidmhoffer says (April 17, 2013 at 8:42 pm): “But at night, when incoming insolation falls to zero, a cloud keeps earth surface warmer than it would otherwise be.”
=======================================================
No, such a process does not exist. Back radiation from clouds cannot affect the temperature of its source, which is the Earth’s surface. This is physically impossible.

rilfeld
April 18, 2013 7:50 am

There is a second, potentially different set of data points that might graph in an interesting fashion: for a given geographical location, the latest frost in the spring and the earliest frost in the fall. The number of days between them might be labelled “growing season”. I believe this data is contained in the raw daily minimum temperatures. The interval is neither imprecise nor an average. My guess is no hockey sticks, as the crop yield changes for crops grown on the margin would be obvious, and the yields and commodity price curves would be unlikely to escape attention since both are watched closely.

Greg House
April 18, 2013 8:01 am

Darko,
The point in your article about “global warming” being within the uncertainty of the thermometer measurements is very good and actually sufficient to debunk the whole thing. The other point, about switch-overs between years, is not quite relevant. You have actually demonstrated that in a purely physical sense one cannot say “a warmer year”, but the same goes for “warmer days” too; this is obvious, so there was no need to make all the comparisons between years.
As for the switch-overs themselves, they do not matter either. Let me give you a simple example to illustrate that. Imagine a group of 365 people and you give them apples every day for many years, each year 1 apple more altogether. Regardless of how the apples are distributed between them and how many switch-overs there are between years, together they get 1 apple more each year, which equals 1/365 of an apple more per person on average. This is what the “global warming” claim is about. So, the “global warming” thing is wrong, but for other reasons, not because of switch-overs or because of ambiguity of the term.
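The apples analogy above is easy to check numerically: however the apples are shuffled among the 365 people, the yearly total still rises by exactly one. A small sketch, with all quantities (base total, number of years, random shares) being illustrative assumptions of mine:

```python
# Sketch of the apples analogy: distribute each year's total randomly among
# 365 people (lots of individual "switch-overs"), with the total growing by
# exactly one apple per year.  The year-on-year totals are unaffected by how
# the shares are shuffled.
import random

random.seed(1)
PEOPLE = 365
YEARS = 5
BASE_TOTAL = 1000.0

totals = []
for year in range(YEARS):
    total = BASE_TOTAL + year            # one extra apple per year, in total
    # distribute the total randomly between the 365 people:
    weights = [random.random() for _ in range(PEOPLE)]
    s = sum(weights)
    shares = [total * w / s for w in weights]
    totals.append(sum(shares))

# year-on-year totals rise by exactly 1, regardless of the distribution
diffs = [round(b - a, 6) for a, b in zip(totals, totals[1:])]
print(diffs)   # each difference is 1.0
```

The switch-overs (who got more or fewer apples than last year) are invisible at the level of the total, which is Greg’s point.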

P Wilson
April 18, 2013 8:08 am

Yes. I am terrified that I will frazzle if the temperature here in the UK reaches 9.2C in March. Mind you, this March just gone was 7.1C, well below the average of 8.5C, and the year before, a whopping 10.4C average, 3.4C above the 1981-2010 average.
In either case, in the space of a few years these so-called nominal anomalies are greater than 0.7C, and the favourable years were those at least above this hypothetical (alleged/nominal) 0.7C, for crops, growing and all sorts of other beneficial factors.

Greg House
April 18, 2013 8:09 am

davidmhoffer says (April 17, 2013 at 10:58 pm): “CO2 doubling doesn’t change temperature. It changes w/m2. W/m2 in turn changes temperature.”
==============================================================
LOL. It is like saying “Mr. Smith did not kill Mrs. Smith. He just pulled the trigger and then the bullet killed Mrs. Smith. We plead not guilty, your honor.”

Greg House
April 18, 2013 8:29 am

darkobutina says (April 18, 2013 at 12:42 am): “So if you want to find whether there is correlation between CO2 and temperatures, you don’t calculate but you measure daily concentrations of CO2 at the same place where the thermometer is. And what you will find is that it is not there since and cannot be there since it would violate all gas laws.”
==============================================================
Well, this is very wrong, I am sorry. There is such a thing as wind; it can bring warmer or colder air from other places, so you cannot expect such a correlation even if there were a warming effect of CO2.
I do not see a violation of gas laws either. Gases have different thermal capacities, so you can get different temperatures depending on the gas mixture. The real warming effect of CO2 might be like 0.0001C, that is all, and it is not what the IPCC calls “greenhouse effect”. They mean that CO2 warms the surface by returning some “back radiation”. Such a warming is physically impossible. But this has nothing to do with correlations.

April 18, 2013 8:52 am

From Darko Butina
My website darkobutina@l4patterns.com is now live and the paper discussed here, plus my clustering algorithm are available in PDF format for free viewing.

Robany Bigjobz
April 18, 2013 8:56 am

Something else to consider:
Imagine a year with a constant base temperature of 10C (arbitrary) with a superimposed sinusoidal variation of amplitude 4C and period 2 weeks. Now imagine the following year has a constant base temperature of 12C (bigger than the thermometer’s accuracy) with the same sinusoidal variation. We can all agree that this second year is hotter than the first.
Finally imagine a third year with a constant base temperature of 12C but this time put a 90 degree phase shift onto the sinusoid to make it a cosine before superimposing it. The third year has some days that are hotter than the same day in the first year and some that are cooler. By your definition, Dr Butina, the third year is not hotter than the first but the second year is. However, the total heat content of the air in the second and third years is equal but only one is considered hotter according to you. It would seem to me that your definition of “hotter” is highly phase sensitive and thus possibly invalid. I think most people would agree that year 3 was hotter than year 1. With sunlight, precipitation, etc, held constant, I would suggest most temperate-region vegetation would grow the same amount in years 2 and 3 while growing more than in year 1.
This is not to say that your pattern recognition methods are invalid or to defend blind use of statistics. However your complete dismissal of any form of aggregation of data, whether by means or otherwise, puts you at odds with many people’s instinctive concept of hotter and real world observations such as plant growth.
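The three hypothetical years above can be checked directly. A sketch, using a 364-day year so the two-week sinusoid completes whole periods (my simplification; all other numbers are taken from the comment):

```python
# Sketch of the three hypothetical years: year 1 at a 10 C base, years 2 and
# 3 at 12 C, with year 3's sinusoid phase-shifted by 90 degrees (a cosine).
# Years 2 and 3 have identical means, yet only year 2 passes the strict
# every-day-hotter test against year 1.
import math

DAYS = 364                     # 26 whole two-week periods, so means are clean
PERIOD = 14.0                  # days
AMP = 4.0                      # C

def year_temps(base, phase):
    return [base + AMP * math.sin(2 * math.pi * d / PERIOD + phase)
            for d in range(DAYS)]

year1 = year_temps(10.0, 0.0)
year2 = year_temps(12.0, 0.0)
year3 = year_temps(12.0, math.pi / 2)      # 90 degree shift: a cosine

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(year2), 6) == round(mean(year3), 6))    # True: equal heat
print(all(t2 > t1 for t1, t2 in zip(year1, year2)))      # True: every day hotter
print(all(t3 > t1 for t1, t3 in zip(year1, year3)))      # False: some days cooler
```

This makes the phase-sensitivity concrete: the mean sees years 2 and 3 as identical, while the strict day-by-day criterion separates them.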

Darko Butina
April 18, 2013 9:27 am

Apologies, website is http://www.l4patterns.com and contact email darkobutina@l4patterns.com

davidmhoffer
April 18, 2013 9:31 am

Darko;
Thanks for clarifying what you meant. However, you are still applying a technique that was applicable to your purposes in chemistry/biology to a physics problem where it is not.

davidmhoffer
April 18, 2013 1:36 pm

Robany Bigjobz says:
April 18, 2013 at 8:56 am
>>>>>>>>>>>>>>>
Well articulated.

Chuck Nolan
April 18, 2013 3:15 pm

Rhoda R says:
April 17, 2013 at 1:47 pm
I have to agree AC. One of the recurring problems has been the naive approach to data.
——————————–
One of the problems is the failure to hold data sacrosanct. Instead it is treated as a good starting point then changed as deemed necessary to match the model’s errors.
cn

Ryan
April 19, 2013 2:05 am

@KenB: “Gave you a thumbs down with your attempt purely because a leo can be born in either the Summer or Winter depending upon where they are born, unless your world is seasonally indifferent!! (wink)”
Well that’s fair enough, but since the Greeks weren’t aware of differences in the timing of the seasons south of the Equator (or indeed the fact that the Southern Hemisphere sees entirely different constellations) can we say that the star-signs are specific to the Northern Hemisphere only?
Which raises an interesting question that has never occurred to me before: do Kiwis and Australians have any faith in Northern Hemisphere astrology at all? Maybe they are waiting for a Southern Hemisphere version to be released…..

Mark Aurel
April 19, 2013 5:58 am

Which raises an interesting question that has never occurred to me before: do Kiwis and Australians have any faith in Northern Hemisphere astrology at all? Maybe they are waiting for a Southern Hemisphere version to be released…..

Silly question.
Of course we do.
What makes you think the southern hemisphere people are less gullible than the rest?

Gail Combs
April 19, 2013 6:08 am

I want to thank Dr. Darko Butina. He has used an elegant mathematical proof to summarize the objections I have had with the way the temperature data was handled by the Climate Claque.
He even explained it well enough that I had no problem understanding what he is doing, even though my math and statistics are a bit rusty. Therefore I find it surprising that others do not see what he is saying. He has simply asked the DATA the question.
“Are the temperature readings from the latest few years different from the rest of the years?”
The RAW DATA has not only answered NO! it has answered NO! by two different methods–both based on sound mathematical principles.
I also want to thank Dr. Darko Butina for doing the research that allowed him to say:

” BTW, I am not sure whether anyone has realised that not only a paper that analyse thermometer data has not been written by AGW community, but also not a single paper has been written that validates conversion of Graph 1 to Graph 2 – NOT A SINGLE PAPER! I have quite good idea, actually I am certain why that is the case but will let reader make his/her mind about that most unusual approach to inventing new proxy-thermometer without bothering to explain to wider scientific community validity of the whole process.”

KenB
April 19, 2013 12:12 pm

Ryan says:
April 19, 2013 at 2:05 am
and Mark
Had a smile, the planet earth exists in space with only the perceptions of man nominating and naming hemispheres to order their thinking of what is North or South or up and downunder!
Thanks Dr Darko for sparking a thinking revolution…..

steverichards1984
April 19, 2013 12:50 pm

davidmhoffer says: what about the CO2!
I don’t think Dr. Darko Butina was talking about CO2 at all.
He says: look at the temperature data, the actual readings, forget the rest.
If you can not effectively analyse temperature records from one station then how can you move on to analysing any possible link between temperature and other possibilities?
No one mentions CO2…..
Well done Dr. Darko Butina.

April 20, 2013 4:29 am

My final comments on this topic, sparked by Ryan’s comment, “Thanks Dr Darko for sparking a thinking revolution”, and steverichards1984’s, “look at the temperature data, the actual readings”. The main reason that I accepted Anthony’s offer to introduce my paper was not to seek recognition from scientists, amateur scientists or those simply interested in science, but to start to repair the damage that the man-made global warming movement has done to science. The worst crime against science is when things that we either don’t know or will never know are presented as known, and as a consequence we corrupt our knowledge database. We now teach our children at schools and young scientists at universities that we know everything about air temperatures and CO2, that the Earth’s temperature can be described by a single number, and that we can actually do something to change the Earth’s climate, while the opposite is true: we do not understand temperatures of the past, we do not understand temperatures of the present and therefore we cannot predict temperatures of the future. Since the single most important duty of any scientist is the search for scientific truth, I had a duty to use my own knowledge and expertise to separate facts from fiction when it comes to temperatures and to ask questions that should have been asked 25 years ago, back in 1988 when Hansen announced the arrival of global warming to US politicians and the world media. And my single question was: why did it take 25 years for a retired scientist from outside the climate community to be the first person to look for the supposedly unequivocally alarming trend in the most obvious place, the thermometer data, the same data that has been used to generate the annual averages? What has happened to the scepticism that is part of any scientific field outside the climate sciences?
I don’t know what scientific backgrounds Ryan and Steve Richards have, but I know that my paper was a success, since those two comments summarise what the experimental sciences are all about: be sceptical, use your brain (or what is better known as a sanity check) and look at the data. Data is the only source of knowledge. Here is a hint on how to tell real science from ‘silly-numbers’ science: if one set of instrumental data is correlated to another set of instrumental data, then that is worth reading, since it reflects the data and not the philosophy of the author. However, if a paper compares one model against another model, and neither of the two models can be validated by an experiment or measurement, then that paper is a vehicle for glorification of the author’s reasoning skills and has nothing to do with the experimental sciences. So be sceptical, use your brain and try to understand what the measured data is trying to tell us.