
Dr. Roger Pielke Sr. draws attention today to a new study that cites Fall et al. 2011, aka the “Surfacestations Paper” that I co-authored, which was a follow-up to my original surveys published in Watts 2009. The new paper is:
Martinez, C.J., Maleski, J.J., Miller, M.F., 2012: Trends in precipitation and temperature in Florida, USA. Journal of Hydrology, volumes 452-453, pp. 259-281, 2012.
They took a look at USHCN stations in Florida and found some problems, such as trend aberrations introduced in the conversion from Cotton Region Shelters to MMTS starting in the 1980s that aren’t fully removed by the Menne et al. USHCN v2 adjustments.
Dr. Pielke writes:
they conclude in their paper
This work provides a preliminary analysis of historical trends in the climate record in the state of Florida. While this work did not attempt to fully attribute the cause of observed trends, it provides a first step in future attribution to possible causes including multidecadal climate variability, long term regional temperature trends, and potential errors caused by station siting, regional land use/land cover, and data homogenization.
We need more such detailed analyses, in order to further examine the multitude of issues with the USHCN and GHCN analyses of long term temperature and precipitation trends. Despite what is written on NCDC’s USHCN website, i.e. that
The U.S. Historical Climatology Network (USHCN, Karl et al. 1990) is a high-quality moderate sized data set of monthly averaged maximum, minimum, and mean temperature and total monthly precipitation developed to assist in the detection of regional climate change.
they are really not of as high a quality as claimed.
Entropic Man:
Although others have stated this, let me say it in different words:
The ‘warmist’ position is that there is an increase in temperatures during the period covered by the record (with the probable cause being increased atmospheric CO2, or at the very least primarily due to human activity).
The ‘skeptic’ position is that the record is contaminated, perhaps too much to ever recover any useful information. The only ‘human activity’ affecting the temperature record is local land use, A/C units, jet wash, paving, BBQs, and other things directly affecting the area immediately surrounding thermometers.
I’ve yet to hear anyone refer to skeptics (or myself) as a ‘coolist’, and that would be highly inaccurate. I don’t think either warming or cooling is happening other than natural cycles, which are fairly easy to see in the record. None of these self-described ‘climate scientists’ have ever suggested a plausible cause for the LIA or MWP; they seem to be too busy pretending they didn’t exist.
As with others here, I’d believe there is warming (or warming worth worrying about) if anyone would demonstrate to me that the data and methodologies are sound. I see no such evidence. Then again, I live in a winter climate and can see absolutely no possible downside if there were warming. I’d much prefer it if our coldest winter lows were -35C instead of -42C.
Gail Combs says:
July 28, 2012 at 3:42 pm
“From my point of view the data is being treated as if it came out of a precision analytical lab with calibrated equipment and college-educated lab techs, when it is actually coming from field measurements using also-ran equipment and who knows what type of operators.”
Mann, Hansen and Watts have all come through a meteorology training system based on physics. They would tend to look at climate as a series of physical processes scaled up from laboratory conditions to the atmosphere.
Perhaps some crosstalk might be useful.
A plant can be studied in the lab and its respiration, growth, energy budget, etc, measured accurately.
Put that plant back in the wild and its lab performance is modified by many factors in its environment. As hundreds of species interact, this level of study suffers the same complexity problem as station analysis and is just as hard to get useful data from.
Go up a level again, to the level of a whole ecosystem, and it gets a lot easier as the mid-level complexity averages out in the overall operation of the system. It becomes much easier to measure the performance of a whole forest than the individual behaviour of each tree.
Watts et al advocate using the raw climate data for the same reason: the +ve distortions due to warmed urban stations would be balanced by the -ve distortions of frost hollow stations. This would make the large-scale averaged data for a country more reliable than the individual station records.
The LIA correlates strongly with the Maunder Minimum, a period of sustained low sunspot activity.
http://www.solarstorms.org/SunLikeStars.html
The MWP is harder to pin down, especially since it shows much more in the European data than worldwide.
I’ve included three links to give you a flavour of the debate.
http://www.newscientist.com/article/dn16892-natural-mechanism-for-medieval-warming-discovered.html
http://www.skepticalscience.com/medieval-warm-period.htm
http://wattsupwiththat.com/2009/11/29/the-medieval-warm-period-a-global-phenonmena-unprecedented-warming-or-unprecedented-data-manipulation/
On the basis of the discussion here, suggesting that decreasing efficiency and increasing use would increase the net heat output of air conditioners with time, I suggest an updated basic shape for the yearly average graph for a station in the decades before and after an air conditioner is installed alongside it. For simplicity I assume that no other local changes affect the station and, considering the sensitivities of those here, no global warming trend.
Before the air conditioner is installed the graph would be flat. In the year that the air conditioner is installed, the extra heat would cause an increase in thermometer readings averaging, say, 1C. This would show as a 1C jump in the average temperature from the previous year.
In subsequent years the graph would be flat until changes in use and deteriorating efficiency progressively increased heat output. The graph would then gradually steepen.
You now have a tool for measuring long term climate temperature trends, independent of larger averages. Look through Mr. Watts’ station website for stations that come near my simplified model and inspect the yearly average graphs. If no climate change is taking place, they will match my description.
If there is a global warming trend there will be an underlying slope of a size proportional to the rate of change.
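For concreteness, here is a minimal Python sketch of the shape described above. The 1C step, the flat decade, and the ramp rate are illustrative assumptions only, not measurements from any real station.

```python
# A minimal sketch of the hypothetical yearly-average series described above.
# All numbers (baseline, 1C step, ramp rate, durations) are illustrative
# assumptions, not measurements.

def hypothetical_station_series(years_before=10, years_flat=10, years_ramp=10,
                                baseline=15.0, step=1.0, ramp_per_year=0.05):
    """Return (year_index, yearly_average_C) pairs for the simplified model."""
    series = []
    year = 0
    # Flat before the air conditioner is installed.
    for _ in range(years_before):
        series.append((year, baseline))
        year += 1
    # Step jump in the installation year, then flat while the unit is new.
    for _ in range(years_flat):
        series.append((year, baseline + step))
        year += 1
    # Gradual steepening as efficiency drops and use increases.
    for i in range(1, years_ramp + 1):
        series.append((year, baseline + step + ramp_per_year * i))
        year += 1
    return series

if __name__ == "__main__":
    for year, temp in hypothetical_station_series():
        print(f"year {year:2d}: {temp:5.2f} C")
```

A real station record would add weather noise on top of this shape, and any underlying climate trend would appear as an extra slope beneath it.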
Over to you.
I would do it myself, but you would doubt the outcome as coming from a known warmist. You will have more confidence in the result if you do it yourselves.
Incidentally, a quick trawl through the station website while writing my previous post showed a number of stations with +ve distortion, but I cannot recall offhand seeing any stations which were underreading due to shading, frost hollows, etc.
Could anyone give me a few examples of -ve distortion stations on the site, to reassure me that the voluntary contribution method used to collect the data is not overcounting +ve stations and undercounting -ve ones?
Moderator, how about an upgrade to the posting system to allow previewing of posts? I am finding it difficult to produce error-free copy using the system as is.
[REPLY: WordPress does not offer that feature. There is something called CA Assistant, which I don’t use but a number of our commenters do, or you can do what I do: write your comment in Word and then cut and paste into the comment box. This method proves especially useful when WordPress really screws up and swallows a comment, or a moderator screws up and hits “delete” when he really meant “approve”. -REP]
Entropic man says:
July 29, 2012 at 6:47 am
On the basis of the discussion here…
_____________________________
You are still missing the fact that temperatures were reported as whole degrees in the past. This means any reported differences of less than one degree are artifacts of the calculations.
If the data resolution is not there, you can’t stuff it back in using “averaging”. That only works if you are doing repeat measurements of the same exact thing, like measuring the length of a board.
What is done with temperature is the same as having 30 hundred-cavity machines making widgets. Measuring a widget from each machine and taking the average does not give me more measuring accuracy the way measuring the same widget 30 times would. The two types of measurements are apples and oranges. Unfortunately they are considered the same by climate scientists.
The fallacy of reporting numbers to tenths and hundredths just because the computer spits out a couple of extra decimal places is another reason skeptics think the data stinks. It seems significant figures are no longer taught in school any more, much less statistics.
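To make the distinction above concrete, here is a small Python sketch; the board lengths, error size, and counts are made-up assumptions. Averaging repeat measurements of the same board homes in on that board’s length, while averaging single measurements of many different boards only tells you about the batch average.

```python
import random
import statistics

random.seed(42)
TRUE_LENGTH = 35.3      # cm, one particular board (assumed value)
READING_SD = 0.5        # cm, random reading error (assumed)

# Case 1: measure the SAME board 30 times and average the readings.
# The random reading errors tend to cancel, so the mean homes in on 35.3 cm.
same_board = [random.gauss(TRUE_LENGTH, READING_SD) for _ in range(30)]
print("same board, mean of 30 readings  :", round(statistics.mean(same_board), 2), "cm")

# Case 2: measure 30 DIFFERENT boards (one reading each) and average.
# The mean estimates the average of the batch, not the length of any one board.
different_boards = [random.gauss(random.uniform(34.0, 37.0), READING_SD)
                    for _ in range(30)]
print("30 different boards, batch mean  :", round(statistics.mean(different_boards), 2), "cm")
```

Case 1 narrows in on one board; Case 2 narrows in on the average of many different boards, which is a different quantity.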
Well, you always have to erase the hottest temperature ever recorded on a whole Continent as not ‘official’, it seems, but the problem is that once you decide to become ‘official’ it’s a thorny problem to hide the coldest-ever reading on the Continent:
http://www.australian-information-stories.com/weather-facts.html
Although I suspect the indigenous aborigines might have a good laugh at such whitefella presumptuousness, which simply oozes irony, since those same whitefella Climate Catastrophists will lecture you with their next breath about how important such aboriginal oral history is in the big scheme of things. Their hypocrisy and anti-science know no bounds.
Although some indigenous folk are not so much laughing at their presumptuousness and hypocrisy as wincing, it seems. You’ll get the gist of it here:
http://theblacksteamtrain.blogspot.com.au/
Entropic man says:
On the basis of the discussion here, suggesting that decreasing efficiency and increasing use would increase the net heat output of air conditioners with time, I suggest an updated basic shape for the yearly average graph for a station in the decades before and after an air conditioner is installed alongside it.
Your ‘suggestion’ has no basis in reality. It is fabricated from the whole cloth of your preconceived conclusions and biases.
As such, it comports quite well with ‘climate science’ practices. Tie a ribbon on it, and you can probably get it published.
JJ says:
July 29, 2012 at 8:55 am
“Your ‘suggestion’ has no basis in reality. It is fabricated from the whole cloth of your preconceived conclusions and biases.”
I am disappointed to find that your approach is so negative. I have proposed a hypothetical solution to a problem which Mr Watts will also have considered, avoiding anything controversial. I get rudeness in return.
To demonstrate that you are capable of more than tobacco lobby tactics, perhaps you would like to suggest how the graph of my simplified interaction between a station and an air conditioner should look.
Gail Combs says:
July 29, 2012 at 7:50 am
“Entropic man says:
July 29, 2012 at 6:47 am
On the basis of the discussion here…”
My hypothesis is not temperature-critical.
You could try the same thought experiment, assuming a 3C rise when the air conditioner arrived, which even the old thermometers would detect.
Alternatively, you could instead consider the effect of a new installation alongside an MMTS temperature sensor, which would record with greater sensitivity.
I’m not too surprised by BEST’s results. I’ve looked over a lot of historical data available at NCDC and elsewhere, and it’s pretty obvious that it’s warmed significantly since the 1800s. Before 1850, there were only scattered observations conducted by the U.S. Army Signal Corps, but they definitely support a cooler climate. This doesn’t mean the change is entirely attributable to man, but I suspect there is some correlation.
Still, global warming will NOT be a catastrophe and, in fact, is more apt to be a boon to civilization. According to BEST, it’s already warmed 1.5C since 1800. Far from a catastrophe, this has corresponded with the growth of civilization! So why should we fear another 1 or 2 degree rise over the next 50+ years? It’s the CAGW-ers we should be focusing our efforts on, NOT someone like Dr. Muller, who is making a reasonable, good-faith effort. As Dr. Muller correctly points out, while there has been some increase in temperature, there is NO evidence of the catastrophic effects the CAGW crowd goes on about.
Entropic man says:
July 29, 2012 at 10:31 am
…tobacco lobby tactic…
The only people who are using that tactic are those who want everyone else to believe in the CAGW scare. Did you never hear about all of the data adjustments they have made? The false press releases they have made? Sounds a lot like the tobacco lobby to me. But maybe you just believe that all of us sceptics are in the pay of “Big Oil”, “Big Energy” or even “Big Conspiracy”. Why do you not look at the evidence? Or are you in the pay of “Big Green”?
BTW, I looked at all of the 3 links you provided. You do realize that scepticalscience is anything but. They delete views/comments dissenting from their orthodoxy. So why should I trust their version of events?
And Newscientist? You are kidding, right? You mean newactivist. C’mon Entropic, show us some spine and spirit.
Entropic man says:
I am disappointed to find that your approach is so negative.
I am disappointed, but not at all surprised, that your approach is so willfully obtuse.
I have proposed a hypothetical solution to a problem which Mr Watts will also have considered, avoiding anything controversial. I get rudeness in return.
Your hypothetical solution has no basis in reality. It is wholly a product of your imagination. You got an honest assessment of that. What is required in this circumstance is science, not people making shit up. We have way too much of that already.
To demonstrate that you are capable of more than tobacco lobby tactics, perhaps you would like to suggest how the graph of my simplified interaction between a station and an air conditioner should look.
You are demanding an imaginary answer to an irrelevant question. Imaginary simplifications of real world circumstances that are far from simple are the problem here, not the solution.
A monitoring system has to discern between (at minimum) three intermingled quantities: the signal, the variability, and the contamination. In order to deduce one of the three, you have to know the other two. You can’t fabricate the other two, you have to know what they are. You are attempting to fabricate a contamination profile.
“Say 1C”? Why say 1C? Why not say 1.5C? Or say 0.1C? Or say 0.01C? You assume a step change, despite having been provided information that demonstrates the failure of that assumption.
Then you make up an operation profile. Flat after the start? Why? You made that up.
Increasing at some point later. When? You make that up. At what rate? You make that up.
When you don’t have the data necessary to answer the question, the proper response is to collect proper data, not to make stuff up. If you can’t get the proper data, then you admit that you cannot answer the question; you don’t make shit up.
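One way to see the force of this point is a toy calculation in Python, in which every number is hypothetical: the trend you recover from the same synthetic record depends entirely on the contamination step you choose to assume.

```python
import random

random.seed(0)

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic 40-year record: a small trend plus a contamination step at year 20,
# plus noise. Both the trend and the step are arbitrary illustrative choices.
years = list(range(40))
record = [0.01 * y + (0.6 if y >= 20 else 0.0) + random.gauss(0.0, 0.1)
          for y in years]

# Subtract different ASSUMED contamination steps and see what trend remains.
for assumed_step in (0.0, 0.3, 0.6, 0.9):
    adjusted = [t - (assumed_step if y >= 20 else 0.0)
                for y, t in zip(years, record)]
    print(f"assumed step {assumed_step:.1f} C -> residual trend "
          f"{slope(years, adjusted) * 10:+.3f} C/decade")
```

The data alone cannot tell you which of those assumed steps is the right one; that is why the contamination has to be known, not guessed.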
I speak under correction, of course, but I thought even measuring the length of a board several times won’t give you more significant digits. If your tape measure is marked in centimeters, won’t your error still be +/- 0.5 cm no matter how many times you measure? You can get more accurate, and be sure that the length is 35 cm +/- 0.5 cm and not 36 or 37 cm, but you can’t get to +/- 0.05 cm unless you switch to a tape marked in mm. Is that not right?
James Schrumpf (@ShroomKeppie) says:
You can get more accurate, and be sure that the length is 35 cm +/- 0.5 cm and not 36 or 37 cm, but you can’t get to +/- 0.05 cm unless you switch to a tape marked in mm. Is that not right?
That is correct, for the measurement of a single board. If what you are interested in is not the measurement of a single board, but the average of 1,000 boards, then there is the possibility that you can do better. If the remainders of the measurements (those +/- 0.5 cm bits) are evenly distributed between + and -, then they effectively cancel out over large numbers of measurements.
The question is, are the remainders evenly distributed between + and -? To know, you’d have to have more accurate measurements. If you had those more accurate measurements, you wouldn’t even be asking this question. In order to take advantage of this benefit of large N, you have to assume that the remainders average out, and you have to be correct in that assumption.
Sometimes, such assumptions are well supported. Sometimes, they aren’t. All too often, such assumptions aren’t even recognized, let alone supported.
What process produced these boards? Does it have any bias that could skew the distribution of the remainders? How do you know? Same with temps.
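For anyone who wants to see that in numbers, here is a short Python sketch; the lengths and the biased production process are invented assumptions. With remainders spread evenly, the rounding error of the average nearly cancels; with a biased process it does not, no matter how many boards you measure.

```python
import random
import statistics

random.seed(1)
N = 100_000

def mean_rounding_error(lengths):
    """Error the mean picks up when each length is rounded to a whole cm."""
    rounded = [round(x) for x in lengths]
    return statistics.mean(rounded) - statistics.mean(lengths)

# Remainders spread evenly between -0.5 and +0.5 cm: rounding errors largely cancel.
even_lengths = [random.uniform(30.0, 40.0) for _ in range(N)]
print("even remainders  :", round(mean_rounding_error(even_lengths), 4), "cm")

# A process that clusters lengths just above whole centimetres: the remainders
# no longer average to zero, and neither does the error of the mean.
biased_lengths = [random.randrange(30, 40) + abs(random.gauss(0.0, 0.2))
                  for _ in range(N)]
print("biased remainders:", round(mean_rounding_error(biased_lengths), 4), "cm")
```

The first number shrinks as N grows; the second settles at a fixed offset that no amount of averaging removes.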
Roy UK says:
July 29, 2012 at 11:17 am
“I looked at all of the 3 links you provided. You do realize that scepticalscience is anything but. They delete views/comments dissenting from their orthodoxy. So why should I trust their version of events?
And Newscientist? You are kidding, right? You mean newactivist. C’mon Entropic, show us some spine and spirit.”
Most people regard links supporting their view as good and links that disagree as bad. If I put the same list of links on a pro-cAGW site, the more vocal posters would describe wattsupwiththat in the same terms you use for scepticalscience. Can you suggest neutral sources a warmist and a sceptic can both agree on?
As for deletions, I have had several posts deleted here, on a topic the moderators have asked me not to discuss at present.
Spine and spirit? I am here, a warmist among sceptics, trying to debate the science with those few people interested in doing so and getting insults from the rest.
In that spirit, I had best ignore the first part of your post, couched in a style which a journalist recently described as “dogmatic garbage”.
How would that work? Assume there are a thousand volunteers with standardized tape measures marked in cm out there, and every day they get a new board to measure. Every day at noon they measure their daily board with their identical Craftsman tape measures, and then call in their measurements in centimeters with +/- 0.5 cm error bars.
What would the statistical process be if one wanted to know the average monthly length of the boards to a precision of +/- 0.05 cm? Stats was a very weak spot for me in math, and I can’t even claim total understanding of calculus. But it seems to me to be a “trick” of some sort to presume you could do some kind of statistical analysis and then say that the average board length was actually 35.3 cm +/- 0.05 cm.
If a brief explanation is possible, I’d love to hear it, but I don’t want to exacerbate anyone’s incipient carpal tunnel syndrome!
James Schrumpf (@ShroomKeppie) says:
How would that work?
It works if you have a large number of measurements and if the errors are distributed with a mean of 0.
What would the statistical process be if one wanted to know the average monthly length of the boards to a precision of +/- 0.05 cm?
The process is no more complicated than adding up the measurements and dividing by the number of measurements. There is no special statistical processing of the data that gives you higher precision. The precision is determined by the number of measurements and the distribution of the errors, i.e. the precision is determined by the data, not by the processing.
The stats trick is not in achieving the precision, but in measuring that precision. The precision is what it is, but how can you know what it is?
Basically, it is a probability calculation, driven by the number of measurements and the distribution of the errors. Consider the simple case where the error for an individual measurement is either -0.5 cm or +0.5 cm. This distribution has an average of 0, so it is a bit like flipping a fair coin – for a small number of flips, you might get a lot more heads than tails. For a small number of measurements, you might get a bunch of +0.5 cm errors and very few -0.5 cm errors. That would tend to drive the error of the average toward 0.5. Over a very large number of flips, the probability that you are going to stack up way more heads than tails is very, very small – it’s going to drive towards even. Similarly, over a very large number of measurements, the probability that you are going to get way more +0.5 cm errors than -0.5 cm errors is very, very small – it’s going to drive the error of the average towards 0.
Just as you can calc the odds of getting X number of heads out of Y number of flips of a fair coin, you can also calc the odds of getting an average error as large as (insert arbitrary precision goal here) – provided that you know the probability distribution for the errors. And there is the rub. What if the errors are a bit like an unfair coin, if they don’t average to zero? Or what if the possible - error is smaller than the possible + error? Or both? In those cases, the precision you calc under the wrong assumptions will also be wrong.
How well do we know the probability distribution of the errors of 30,000 thermometers over 150 years of measurements?
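To put rough numbers on the coin-flip picture, here is a small Python check. It assumes (and it is only an assumption) that each reading error is independent and spread evenly over +/- 0.5; under that assumption the spread of the average error shrinks like 1/sqrt(N).

```python
import math
import random
import statistics

random.seed(2)

# Assumed error model: independent reading errors, uniform on [-0.5, +0.5].
# Under that assumption the standard error of the mean is (0.5/sqrt(3))/sqrt(N).
for n in (10, 100, 1000, 30000):
    theory = (0.5 / math.sqrt(3)) / math.sqrt(n)
    # Monte Carlo check: average n errors, repeat 200 times, measure the spread.
    trials = [statistics.mean(random.uniform(-0.5, 0.5) for _ in range(n))
              for _ in range(200)]
    print(f"N={n:6d}  theoretical SE={theory:.4f}  "
          f"simulated SE={statistics.stdev(trials):.4f}")
```

Change the assumed error distribution and both columns change with it, which is exactly the rub noted above.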
So essentially, what is happening is that if a thousand people look at their cm-marked tape measures and measure their board at somewhere between 35 and 36 cm, and they estimate 35.3 cm +/- 0.5 cm, we can assume that the +/- errors average out to zero and that the 35.3 measurement is good precision? Sounds like you’d also have to hope the human reading errors average out to zero as well as the tape measure errors.
Sounds like a lot of hoping, and it doesn’t sound justified when everyone is measuring a different length board.
@entropic man,
No, we’re not going to do your research for you. If you believe there are -ve sites equal to +ve sites, that is up to you to research; after all, it is your theory. I think the surfacestations project documented 85%+ of the stations, so you don’t even have to do that part.
Good luck, let us know what you find.