The first press release announcement thread is getting big and unwieldy, and some commenters can’t finish loading the thread, so I’m providing this one with some updates.
1. Thanks to everyone who has provided widespread review of our draft paper. There have been hundreds of suggestions and corrections, and for that I am very grateful. That’s exactly what we hoped for, and it can only make the paper better.
Edits are being made based on many of those suggestions. I’ll post up a revised draft in the next day.
2. Some valid criticisms have been made related to the issue of the TOBS data. This is a preliminary set of data, with corrections added for the “Time of Observation” bias, which can in some cases cause double max-min readings to be counted if not corrected for. It makes up a significant portion of the adjustments prior to homogenization, as seen below in this older USHCN1 graphic. TOBS is the black dotted line.
TOBS is a controversial adjustment. Proponents of the TOBS adjustment (created by NCDC director Tom Karl) say that it is a necessary adjustment that fixes a known problem; others suggest that it is overkill, solving small problems while creating an even larger one. For example, from a recent post on Lucia’s by Zeke Hausfather, you can see how many adjustments go into the final product.
The question is: are these valid adjustments? Zeke seems to think so, but others do not. Personally I think TOBS is a sledgehammer used to pound in a tack. This looks like a good time to settle the question once and for all.
Steve McIntyre is working through the TOBS entanglement with the station siting issue, saying “There is a confounding interaction with TOBS that needs to be allowed for…”, which is what Judith Curry might describe as a “wicked problem”. Steve has an older post on it here which can be a primer for learning about it.
The TOBS issue is one that may or may not make a difference in the final outcome of the Watts et al. 2012 draft paper and its conclusions, but we asked for input, and that was one of the issues that stood out as a valid concern. We have to work through it to find out for sure. Dr. John Christy dealt with TOBS issues in his paper covered on WUWT: Christy on irrigation and regional temperature effects
Irrigation most likely to blame for Central California warming
A two-year study of San Joaquin Valley nights found that summer nighttime low temperatures in six counties of California’s Central Valley climbed about 5.5 degrees Fahrenheit (approximately 3.0 C) between 1910 and 2003. The study’s results will be published in the “Journal of Climate.”
Most interestingly, John Christy tells me that he had quite a time with having to “de-bias” data for his study, requiring looking at original observer reports and hand keying in data.
We have some other ideas. And of course new ideas on the TOBS issue are welcome too.
In other news, Dr. John Christy will be presenting at the Senate EPW hearing tomorrow, for which we hope to provide a live feed. Word is that Dr. Richard Muller will not be presenting.
Again, my thanks to everyone for all the ideas, help, and support!
=============================================================
UPDATE: elevated from a comment I made on the thread – Anthony
Why I don’t think much of TOBS adjustments
Nick Stokes’s explanation follows the official explanation, but from my travels to COOP stations, I met a lot of volunteers who mentioned that with the advent of the MMTS, which has a memory, they tended not to worry much about the reading time, since being at the station at a specific time every day was often inconvenient. With the advent of the successor display to the MMTS unit, the LCD-based Nimbus, which has memory for up to 35 days (see spec sheet here http://www.srh.noaa.gov/srh/dad/coop/nimbus-spec.pdf), they stopped worrying about daily readings and simply filled them in at the end of the month by stepping through the display.
From the manual http://www.srh.noaa.gov/srh/dad/coop/nimbusmanual.pdf
Daily maximum and minimum temperatures:
· Memory switch and [Max/Min Recall] button give daily highs and lows and their times
The Nimbus thermometer remembers the highs and lows for the last 35 days and also records the times they occurred. This information is retrieved sequentially day by day. The reading of the 35 daily max/min values and the times of occurrence (as opposed to the “global” max/min) are initiated by moving the [Memory] switch to the left [On].
So, people being people, rather than being tied to the device, they tend to do the readings at their leisure if given the opportunity. One fellow I met, who had a Winnebago parked in his driveway, when asked if he traveled much, said he “travels a lot more now”. He had both the CRS and MMTS/Nimbus in his back yard, and said he traveled more now thanks to the memory on the Nimbus unit. I asked what he did before that, when all he had was the CRS, and he said, “I’d get the temperatures out of the newspaper for each day”.
Granted, not all COOP volunteers were like this, and some were pretty tight lipped. Many were dedicated to the job. But human nature being what it is, what would you rather do? Stay at home and wait for temperature readings or take the car/Winnebago and visit the grand-kids? Who needs the MMTS ball and chain now that it has a memory?
I also noticed many observers now have consumer-grade weather stations with indoor readouts. A few of them put the weather station sensors on the CRS or very near it. Why go out in the rain/cold/snow to read the mercury thermometer when the memory of the weather station can do it for you?
My point is that actual times of observation may very well be all over the map. There’s no incentive for the COOP observer to do it at exactly the same time every day when they can just as easily do it however they want. They aren’t paid, and often don’t get any support from the local NWS office for months or years at a time. One woman begged me to talk to the local NWS office to see about getting a new thermometer mount for her max/min thermometer, since it wouldn’t lock into position properly and often would screw up the daily readings when it spun loose and reset the little iron pegs in the capillary tube.
Some local NWS personnel I talked to called the MMTS the “Mickey Mouse Temperature System”, obviously a term of derision. Wonder why?
So my point in all this is that NWS/NOAA/NCDC is getting exactly what they paid for. And my view of the network is that it is filled with such randomness.
Nick Stokes and people like him who preach to us from on high, never leaving their government offices to actually get out and talk to the people doing the measurements, seem to think the algorithms devised and implemented from behind a desk overcome human urges to sleep in, visit the grand-kids, go out to dinner and get the reading later, or take a trip.
Reality is far different. I didn’t record these things on my survey forms when I did many of the surveys in 2007/2008/2009 because I didn’t want to embarrass observers. We already had NOAA going behind me and closing obscenely sited stations that appeared on WUWT, and NCDC had already shut down the MMS database once, citing “privacy concerns”, for which I ripped them a new one when I pointed out that they published pictures of observers at their homes, standing in front of their stations, with their names attached. For example: http://www.nws.noaa.gov/om/coop/newsletters/07may-coop.pdf
So I think the USHCN network is a mess, and TOBS adjustments are a big hammer that misses the mark, given human behavior around filling out forms at times no one can predict. There’s no “enforcer” who will show up from NOAA/NWS if you fudge the form. None of these people at NCDC get out in the field; they prefer to create algorithms from behind a desk. My view is that you can’t model reality if you don’t experience it, and they have no hands-on experience and no clue.
More to come…

![USHCN-adjustments[1]](http://wattsupwiththat.files.wordpress.com/2012/06/ushcn-adjustments1.png?resize=640%2C465&quality=75)
Let’s be honest, TOBS is a complete sideshow that may well be unresolvable. However, the basic premise of Watts et al. (2012) is wrong-headed. Why? Well, let me make this as simple as I can, with two key paragraphs from the Skeptical Science critique:
1. Quite simply, the data are homogenised for a reason. Watts et al. are making the case that the raw data are a ‘ground truth’ against which the homogenisations should be judged. Not only is this unsupported in the literature, the results in this paper do nothing to demonstrate that. It is simply wrong to assume that all the trends in raw data are correct, or that differences between raw and adjusted data are solely due to urban heat influences. However, these wrong assumptions are the basis of the Watts conclusion regarding the ‘spurious doubling’ of the warming trend.
2. The… final conclusion that adjusted temperature trends are ‘spuriously doubled’ (0.155°C vs. 0.309°C per decade raw vs. adjusted data) relies on a simple assumption — that the raw data must be correct and the homogenised data incorrect. There is no a priori basis for this assumption and it is unsupported by the literature.
Martin Lack says:
August 3, 2012 at 3:23 am
————————————————————-
I am sorry, Martin, but you have seriously mischaracterised what the paper is saying. Read it again, and again perhaps, and you will see. It is not about UHI; it is about the incorrect, distorting application of the mathematical treatment of different classes of reporting stations. Simply put, the adjustments are applied so as to produce a spurious warming signal.
As for the statement regarding homogenisation, well, that is just silly. If you don’t work from the raw and original data, how can you discover whether the subsequent adjustments are valid?
Sorry Keith. If the warming signal were spurious, Watts et al would have eliminated it (rather than merely halving it). To believe otherwise is to admit you think yourself equivalent to Galileo; which you are not.
“Sorry Keith. If the warming signal were spurious, Watts et al would have eliminated it (rather than merely halving it). To believe otherwise is to admit you think yourself equivalent to Galileo; which you are not.”
You can mention warming, but you can’t just tag the word “signal” onto it. Signal of what, for a start?
You’d better not be thinking of CO2. So far your lot have not found a discernible signal of any description, and you’ve no evidence outside of the models and some very dodgy interpretations of the paleo-records that the planet has a high sensitivity to CO2 in the first place.
As to your opinions about raw data and what you can and can’t do with it, I’m almost lost for words. I’d really recommend you stop reading SkS.
James Humbolt says:
August 1, 2012 at 8:51 pm
So once again, the MO here appears to be that if you agree with the work, anonymous review is fine, but as soon as significant or legitimate criticisms appear, you find it necessary to make a point of hunting down and exposing/blackmailing identities.
Nice faux outrage and conclusion-jumping, but you missed the sawdust pit. Doc Perlwitz has posted here (at length) on numerous threads and has even been kind enough to provide backstory on some of his publications. He is not a stranger here.
Martin Lack says:
August 3, 2012 at 3:51 am
Sorry Keith. If the warming signal were spurious, Watts et al would have eliminated it (rather than merely halving it).
I see you have a problem with English comprehension and short-term information retention. The paper addresses the spurious *doubling* of US temperature trends.
To believe otherwise is to admit you think yourself equivalent to Galileo; which you are not.
Heh…
Jan P Perlwitz
Jan you are shouting rather a lot. That suggests things to the listener.
“A wise old owl sat in an oak
the more he saw, the less he spoke,
the less he spoke, the more he heard.
Why can’t we all be like that wise old bird.”
Martin Lack says:
August 3, 2012 at 3:23 am
2. The… final conclusion that adjusted temperature trends are ‘spuriously doubled’ (0.155°C vs. 0.309°C per decade raw vs. adjusted data) relies on a simple assumption — that the raw data must be correct and the homogenised data incorrect.
If the raw data is correct, homogenizing it can produce a flawed product.
If the raw data is incorrect, then no amount of homogenizing will produce correct information.
There is no a priori basis for this assumption and it is unsupported by the literature.
Your statement fails the Fact-Check test:
“Whenever possible, we have used raw data rather than previously homogenized or edited data.”
http://berkeleyearth.org/dataset/
Willis Eschenbach’s written (and been cited) often enough to be considered “part of the literature”:
http://www.google.com/url?sa=t&rct=j&q=raw+data+better+than+homogenized+data&source=web&cd=4&ved=0CFIQFjAD&url=https%3A%2F%2Fceprofs.tamu.edu%2Flbeason%2FThe%2520Smoking%2520Gun%2520At%2520Darwin.doc&ei=PtAbUPO6EsHK2AWXz4HgDw&usg=AFQjCNGcs8So6cygbPJH-tsyqzWW5e0Zww
And then there are the independent analysts:
http://suffolkboy.wordpress.com/dublin-airport-homogenization-of-giss-weather-data/
Well, Martin, perhaps the issue here is of a semantic nature. Let me say this: the Watts et al. paper indicates that at least 50% of the previously reported warming signal is the consequence of the mathematical manipulation of the data by others. The paper shows this by the simple expedient of grouping the reporting stations using a more rigorous, professionally accepted methodology, and then showing that the adjustments applied run from the worst to the best in reducing order.
I am not sure why you should find this unacceptable, as the data and process used are quite openly stated and the result seems unequivocal. When Mr. Watts makes public his raw data as used, I have no doubt it will make replication easier, but you can already apply the methods he has used if you wish to put in the effort.
It is unrealistic to think that using the adjustments already applied by others will do anything but replicate their conclusions. This paper is not about TOBS or error bars; it is purely about the effect of NOAA methodology on the stated trends. That methodology results in the temperature trend being overstated by 50% or more in the continental USA data. The obvious implication is that if the same methodology were applied to the global reporting stations, the same exaggerated result would appear there as well.
Why this should be a problem is beyond me as almost all of climate science seems to be built upon statistics and statistical manipulation. The Watts et al. paper highlights a significant error in that manipulation. Obviously if this evidence proves to be resistant to replication so be it, but if replication is done then some serious reassessment of the scope of this problem will be required. This shouldn’t be seen as threatening or destabilising but rather as the process we all follow along the path to enlightenment.
Frank K. – Three small notes on the consistency of BEST/USHCN for US surface temperatures:
* Correlation of time series is freshman math – perhaps the BEST method authors felt they didn’t need to include that textbook?
* In the continental US, the 1218 temperature stations average ~82km distance apart. Correlations of temperature anomaly over that distance, at any latitude, are very very strong.
* Most importantly, inaccuracies in the BEST method (whatever they may be), are a side issue to the fact that two different techniques (_1_ statistically identifying outlier changes and treating before/after as different records, vs. _2_ adjustments based upon station metadata and the effects of equipment, location, and observation time changes) produce substantially the same results.
Consistency of two separate methods increases the strength of the results, and indicates that the USHCN adjustments for factors like TOBS are valid.
Keith & Bill – As I have said, if Anthony Watts will act on the constructive criticism of others and then submit his revised paper for peer review and get it published in a reputable journal, we may have a chance of determining what merit there is in his methodology (i.e. assuming you do not think, like John Christy, that mainstream climate science is a cabal of those willing to toe the party line)… However, judging by Watts’ dismissive response to any criticism and the nature of that criticism, I am not hopeful that he will even try (let alone succeed).
KR
* Correlation of time series is freshman math – perhaps the BEST method authors felt they didn’t need to include that textbook?
What does this strange statement have to do with anything? I didn’t say anything about BEST’s methods…
“Correlations of temperature anomaly over that distance, at any latitude, are very very strong.”
NO! Look at the southern latitudes in Figure 3 of Hansen et al. Look at the equator. Geez. And what is meant by “very, very strong”? 0.5, 0.6, 0.7, 0.8? Such adjectives are of course meaningless when constructing an algorithm to actually, you know, compute something**.
Funny thing, if you examine Figure 3, you’ll see there are stations 5000 km apart that are “highly correlated” and some < 100 km apart that are not.
Now having said this, GISS are free to compute their "average temperature anomalies" any way they want to. If they want to consider stations 1000 km away, sure go for it! Whether it makes any sense at all or is thermodynamically relevant to anything (particularly claims of "warming") is another matter altogether.
—
**This all may be irrelevant anyhow since we don't know how GISS actually calculated their correlations…
Martin Lack says:
August 3, 2012 at 7:22 am
However, judging by Watts’ dismissive response to any criticism and the nature of that criticism, I am not hopeful that he will even try (let alone succeed).
*ahem*
My bolding.
http://wattsupwiththat.com/2012/07/31/watts-et-al-paper-2nd-discussion-thread/#comment-1049696
And…
Again, my bolding.
http://wattsupwiththat.com/2012/07/31/watts-et-al-paper-2nd-discussion-thread/#comment-1049863
You really should work on that English comprehension problem…
This is a rather long comment, so I’ll put the conclusion up front as an abstract:
In the first discussion thread, many people suggested adding the estimate of the uncertainty in the warming trend for each of the data subsets studied.
When you have a series of (x,y) pairs and perform a least squares to estimate a y=mx+b trend line, there is a simple formula for determining the statistical uncertainty in m and confidence ranges.
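That simple formula can be sketched in a few lines (a minimal pure-Python sketch; the function name is mine, and this is the textbook OLS standard-error result, not anything from the paper under discussion):

```python
import math

def slope_with_uncertainty(x, y):
    """Least-squares fit of y = m*x + b, returning m and its standard error.

    Standard OLS result: SE(m) = sqrt( SSE / ((n - 2) * Sxx) ),
    where SSE is the residual sum of squares and Sxx = sum((x - xbar)^2).
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b = ybar - m * xbar
    sse = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y))
    return m, math.sqrt(sse / ((n - 2) * sxx))
```

On noise-free data the residuals vanish and the standard error goes to zero, which is exactly why feeding in a calculated T(ave) with its uncertainty assumed away understates the real error, as argued below.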
However, the formula is no longer simple when the x of the (x,y) pair has an error bar.
The (x,y) in our case is (TempStar, time(months)), with TempStar = (U(TempStar), S(TempStar)).
where U(TempStar) is the estimate of the monthly mean, and S(TempStar) is the mean std error of U(TempStar).
Now, U(TempStar), as used in most climate analysis papers including Watts-2012, is the monthly mean T(ave) temperature anomaly after including or excluding certain adjustments.
You COULD do an analysis of the uncertainty in the slope m of the trend line using U(TempStar) and assuming S(TempStar) = 0. The resulting uncertainty in the slope would be unrealistically, absurdly small, and statistically faulty. I maintain that you can use this underestimate of the uncertainty only to show that two slope values of two subsets that fall within it are NOT significantly different, and therefore that there is no reason to assume the two subsets are different. However, if the slopes differ by an amount greater than this underestimate of slope uncertainty, you still CANNOT and MUST NOT conclude the subsets are different. Assuming S(TempStar) = 0 is simply wrong.
What are the elements that go into S(tempstar)?
Let V(*) = S(*)^2, variance.
A. Let’s start here: every temperature reading is to the nearest deg F, so each reading of temp T has an S of about 0.3 deg F. How many temps in a month? 60 (30 days of max and min). Assuming independence of readings, S(TempStar_A) = S/sqrt(60) = 0.3/7.75 = 0.04.
B. TempStar is also a difference between the U(TempMonth) for the month and the baseline Monthly average (U(TempBase)). The difference is the anomaly. Since this is a difference, you must add the variances V(TempStar_A) and V(TempBase). What is V(TempBase)?
This is a bit complicated, but a simple view is to assume it is based upon S(TempStar_A) sampled once each year of the study. Take a 30-year study: S(TempStar_B) = S(TempStar_A)/sqrt(30) = 0.01.
Whether S(TempStar_A) and S(TempStar_B) are independent is a matter of debate. But in the worst case they are dependent, and you add the S values directly.
So S(TempStar_A+B) appears to be no worse than 0.05. Not zero, and not insignificant when looking at a slope of 0.10.
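The bookkeeping in steps A and B can be checked with a few lines (a sketch of the variance arithmetic above; the variable names are mine, and the worst-case total matches the ~0.05 figure):

```python
import math

# Step A: each reading is to the nearest deg F, so S ~ 0.3 deg F per reading;
# ~60 readings per month (30 days x max/min), assumed independent.
s_reading = 0.3
s_A = s_reading / math.sqrt(60)       # ~0.04 deg F

# Step B: the baseline monthly mean, sampled roughly once per year over 30 years.
s_B = s_A / math.sqrt(30)             # ~0.01 deg F

# Worst case (A and B fully dependent): the standard deviations add directly.
s_worst = s_A + s_B                   # ~0.05 deg F
# Independent case: the variances add instead.
s_indep = math.sqrt(s_A ** 2 + s_B ** 2)
```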
C. Here’s the biggie, I’ve saved for last.
When we are measuring temperatures, we are not measuring T(ave) at the station 60 times a day. We are measuring 30 T(max)s and 30 T(min)s. While it is perfectly OK to compute 30 daily T(ave_day) = (T(min)+T(max))/2 as intermediate values for estimating the monthly mean U(TempStar), you MUST NOT combine the daily T(min) and T(max) into a T(ave) when estimating S(TempStar). S(TempStar_C) is not zero (as you would assume from T(ave)).
V(TempStar_C_day) = ((T(max)-T(ave))^2 + (T(min)-T(ave))^2)/2
(I think you divide by 2 since it is the full population and not just a sample.)
If there is a 30 degree difference between min and max, then
V(TempStar_C_day) = (15^2+15^2)/2 = 225,
S(TempStar_C_Day) = Sqrt(V) = 15
S(TempStar_C_Month) = 15/sqrt(30) = 2.7 deg F. Ooops!
_C dominates the _A and _B.
The mean std error of each monthly estimate S(TempStar) would be about 2.7 deg F if the daily min-max range is 30 deg F.
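Step C can be sketched the same way (a pure-Python check of the min/max variance arithmetic above; the function name is mine, and note that 15/sqrt(30) works out to about 2.74 deg F):

```python
import math

def monthly_se_from_daily_range(daily_range, days=30):
    """Std error of the monthly mean built from daily min/max readings.

    Treats each day's max and min as a two-point population around T(ave):
    V_day = ((Tmax - Tave)^2 + (Tmin - Tave)^2) / 2, which for a symmetric
    daily range of R deg F gives S_day = R/2.
    """
    half = daily_range / 2.0
    v_day = (half ** 2 + half ** 2) / 2.0   # = half^2
    s_day = math.sqrt(v_day)                # = half
    return s_day / math.sqrt(days)

# A 30 deg F daily range: S_day = 15, monthly SE = 15/sqrt(30) ~ 2.74 deg F.
```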
Here’s the problem in a nutshell.
The process has been to first convert the Tmin and Tmax into a daily Tave with no uncertainty associated with it. Everything from then on uses the Tave, and the only uncertainty used past that point is what derives from the difference Tave – Tbase to get an anomaly.
But if we go back to what we actually measured, the Tmin and Tmax, the uncertainty in the system really is large.
In a manufacturing process, you do analysis on each and every sample. You don’t pick two sequential widgets off the assembly line, average their results, then pick another two sequential widgets, average theirs, and then perform the analysis on only those averages. You just don’t do that! But that’s what we are doing with temps.
Now, that I have made my point, I’m going to back off just a bit.
S(TempStar) isn’t zero. S(TempStar) isn’t 2.7 either, because that estimate isn’t based on a random sample of 60 temperatures in a month; it is a non-random sample of the min and max of each day, outlier points by definition.
The real S(TempStar) is somewhere between zero and 2.7. But even if it is 2 or 1, that is an error estimate that must be included in the uncertainty confidence limits when estimating temperature trends and comparing two subsets of temperatures.
In closing, I just want to point out that every adjustment someone makes to a temperature record must add to the uncertainty in the calculated signal, which can dilute the significance of any result. That goes for TOBS.
Correction to Rasey 10:05 above (I transposed the definitions of (x,y)):
However, the formula is no longer simple when the y of the (x,y) pair has an error bar.
The (x,y) in our case is (time(months),TempStar), with TempStar = (U(TempStar), S(TempStar)).
Amazing how so many people have bent over backwards to show why such a simple TOBS adjustment would not work, but have never demonstrated why an offset to all post-midnight observations is justified.
Two sets of max / min temperature instruments at the same location will record the EXACT SAME maximum and minimum events. The only unknown is what date those events happened.
Next time kids, try working with actual data…
24:00 observations: Max and Min temperatures were for the previous date.
08:00 observations: Max temperatures are for the previous date and the Min temperatures are for the date of observation.
For historical records, simply adjust the dates and then you can match the stations to each other once again.
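The re-dating rule above can be sketched in a few lines (stdlib only; the function name and the (tmax_date, tmin_date) return shape are mine, and the rule is exactly as the comment states it):

```python
from datetime import date, timedelta

def align_obs_dates(obs_date, obs_hour):
    """Return (date_for_tmax, date_for_tmin) per the rule above.

    24:00 observations: both extremes belong to the previous date.
    08:00 observations: the max belongs to the previous date, the min
    to the date of observation.
    """
    if obs_hour == 24:
        prev = obs_date - timedelta(days=1)
        return prev, prev
    if obs_hour == 8:
        return obs_date - timedelta(days=1), obs_date
    raise ValueError("rule stated only for 24:00 and 08:00 observations")
```

With the dates re-assigned this way, two co-located instruments read at different hours can be matched event-for-event again.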
P.S:
Next time you want to show examples of actual data, please post the exact date and time of each maximum and minimum temperature for a specific location. With modern instruments, one-minute resolution is easy to obtain.
Jan P Perlwitz says:
August 3, 2012 at 1:01 am
“name association, innuendo about agendas, conjecture, and conspiracy fantasies, which are spun here now regarding my person.”
So are you saying that you don’t know Judith Perlwitz and that she is no relation? That you don’t sympathise with her co-authors and that this has not colored your own view? That their forthcoming PNAS paper will not have its substance put in jeopardy by Watts’ revelations? That Ben Santer and Tom Wigley were not involved with some dubious practices at the IPCC and Hadley CRU, as exposed by the “climategate” e-mails and so on? That they are not co-authors of Judith Perlwitz’s PNAS submission? That we are supposed to believe you left your prejudices at the door? That you are not employed at Columbia U, and that Judith P. is not employed at Colorado U?
Get Real !
When you are in a hole – stop digging !
Maybe you missed my August 2, 2012 at 10:04 pm
At the beginning of the power point presentation, in the following phrase:
“This study compares [of] the rate of warming of well”
delete the [of].
From Jan P Perlwitz on August 3, 2012 at 1:48 am:
*ahem* You’re being a bit… parsimonious… with the details of what happened on that thread.
One full week after the last comment, you tried to slip in The Last Word using an insulting tone from the start:
Anthony replied, allowing through your “insulting and spiteful comment”.
Smokey made a dispassionate reply. What you said next was lost:
So you found a thread that was dead and ignored by normal site standards, used the opportunity to insult Anthony and the site regulars, which Anthony allowed as this site does have light moderation. I don’t know what you tried to say next, but this site is Anthony’s “home on the internet”, and while he allowed the one comment through, he has no obligation to let through another like the first.
Anthony then closed comments on the previously-dead thread, as is his right.
You were allowed to state your piece, a full week after everyone else was done, decided to insult your host and his guests… And now you’re kvetching you weren’t allowed to say more.
Shall we consider this an example of how you and your colleagues at GISS like to operate?
Anthony,
You wrote — “or speak in third person lagomorphic languages”?
First had to look up “lagomorphic”. Left me in greater confusion. Tried to imagine potential explanations for the phrase. Could not come up with anything satisfying. Decided to ask — but continued to scroll down through the thread. Then a few posts below your comment I came across a post by Phil where he gives —
Josh Halpern/Eli Rabett
and with “hare raising” expectations went to the listed site. Hare! Hare! Anthony!
Though almost all are familiar with the story of the Tortoise and the Hare, as a child I remember reading a story about a frog and a rabbit contending over their leaping abilities and deciding to have a race. Frogs jump in a straight line, and so the frog won the contest. Afterwards the frog taunted the rabbit for being so scatterbrained, leaping all over the place and getting nowhere. The rabbit defended itself by saying it was built to jump all over the place because that was the best way to escape predators. This jumping around to escape predators is what we see now in man-made global warming “science”, more commonly expressed as “moving the goal posts”.
Anthony, after your “press release” there are a lot of scared rabbits out there.
Eugene WR Gallun
If Inhofe were to “read” your PowerPoint into the Congressional records, would it make a sound?
This post is several days late and dollars short, but I’ll put it in the record anyway.
Here is a suggestion for an analysis experiment to test whether the Time of Observation (TOB) is a significant confounder of the station siting observations recorded in Watts’ “beta” draft paper that is the subject of this thread. The primary objective here is to emphasize that estimates of the uncertainty in the warming trend, i.e. the uncertainty in the slope m of y=mx+b, will be unrealistically small if the stats are based upon the un-measured, calculated T(ave) rather than primarily on the measured T(min) and T(max).
Currently, each of the stations used in the study is segmented along the dimensions Region and Leroy-2010 station class (as interpreted by the surfacestations.org team). The question of whether TOBS confounds the results means we need some simple way to assign a TOB dimension to each station.
My suggestion is as follows:
Take the time period of the study and break it up into three sub-periods.
It could be 1/3, 1/3, 1/3.
Or 40%, 20%, 40%
Or even 50%, 0%, 50%.
The only requirements are that the 1st and 3rd periods be of equal length and be at least 1/3 of the record.
With each of these sub-periods, determine whether the times of observation are primarily AM or PM. Let’s set the divider at 1 PM: if the TOB is after 1 PM it is a PM reading; prior to 1 PM, an AM reading. In each subset, determine whether the count is more AM or PM, and label that subset “A” or “P”.
Using the 1st and 3rd subsets, segment the TOBS dimension as either
“AA” – Station was mostly AM reading throughout the period.
“AP” – Station switched from AM to PM readings over the study period.
“PA” – Station switched from PM to AM readings over the study period.
“PP” – Station was PM reading mostly throughout the period.
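The labeling scheme above can be sketched as follows (a minimal sketch; the function names are mine, and ties default to “A”, an assumption the scheme leaves open):

```python
def tob_label(obs_hours):
    """Label a sub-period 'A' or 'P' by the majority of observation hours.

    Hours of 13 (1 PM) or later count as PM, earlier hours as AM.
    Ties default to 'A' (an assumption; the scheme above doesn't say).
    """
    pm = sum(1 for h in obs_hours if h >= 13)
    return 'P' if pm > len(obs_hours) - pm else 'A'

def station_tob_class(first_period_hours, third_period_hours):
    """Combine the first- and third-sub-period labels into AA/AP/PA/PP."""
    return tob_label(first_period_hours) + tob_label(third_period_hours)
```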
We are analyzing the slope and the uncertainty of the slope of a linear regression, and the beginning and ending regions of the period carry greater weight in estimating the slope. I feel the status of the central sub-period is not important for testing the hypotheses.
IF TOBS is a significant element of bias to the raw, unadjusted, temperature trends, then we should confirm the following hypotheses.
1. “AA” and “PP” segments should have an insignificant difference in slope [see next point].
2. “AP” segments should have a statistically sig. greater slope than all others.
3. “PA” segments should have a statistically sig. lesser slope than all others.
This issue of the statistically significant difference in slope is one I wrote about two days ago upthread. What I concluded in that post, for fitting a set of temperature readings to a y=mx+b regression:
1. You can use the T(ave) (raw) in the calculation of the uncertainty of slope as a very conservative, underestimate of that uncertainty. Let’s call that S_a(m), std dev based upon T(ave).
2. If two slopes fit within that uncertainty bands, then there is no reason to suggest they represent two different populations.
3. But if two slopes differ by more than S_a(m) can justify, you CANNOT yet conclude the populations are different. This is because the uncertainty in the calculation of T(ave) is assumed to be zero when in fact it is quite large.
4. You can also use the T(min)-T(base) and T(max)-T(base) as the points used in determining the slope and uncertainty of slope, which we will call S_m(m) for std dev based upon max and min point. You ought to get the same slope as in 1, but the uncertainty of the slope better reflects the real uncertainty since T(min) and T(max) are the actual measurements. S_m(m) >> S_a(m).
5. I say “better reflects” the uncertainty than S_a(m), but I recognize that it is probably an overestimate, since the T(min) and T(max) are not randomly sampled points during the day but are outlier temperatures.
6. If two slopes are different by more than S_m(m), then the segments come from two different populations and the TOB is vital to the analysis.
7. I expect what will happen is that the difference of slopes will show a significant difference based upon S_a(m), so we cannot conclude they are the same population, but the differences will be insignificant based upon S_m(m), so we should not conclude the populations are different. This leads into an area where we cannot yet conclude either way.
8. If we find ourselves in the situation of 7, I would propose that we create some additional pseudorandom temperature samples. We have T(min), T(max). We create T(ave_1) = (T(max) + T(min previous))/2.
T(ave_2) =(T(max) + T(min next))/2
In one sense, instead of sampling at 90 and 270 degrees in the daily cycle (max and min), we sample at the 0, 90, 180, and 270 degree points. Another way of looking at it is that T(ave_1) and T(ave_2) are real temperatures; the sampling error is WHEN during the day they happen, an uncertainty in the x-dimension of a couple of hours in a 30-year time series. I think we could live with that. Then do the slope and uncertainty analysis using T(min)-T(base), T(max)-T(base), T(ave_1)-T(base), T(ave_2)-T(base). Calculate the uncertainty of the slope, S_4(m), based upon the four “readings” during each day. We should find S_m(m) > S_4(m) >> S_a(m).
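Points 1 and 4 above can be compared directly (a pure-Python sketch under a simplification of one min and one max anomaly per month; function names are mine). The slope from the min/max points matches the slope from the averages, while its standard error, the analog of S_m(m), comes out much larger than the T(ave)-only S_a(m):

```python
import math

def slope_and_se(x, y):
    """OLS slope m and its standard error for y = m*x + b."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b = ybar - m * xbar
    sse = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y))
    return m, math.sqrt(sse / ((n - 2) * sxx))

def compare_uncertainties(months, tmin_anom, tmax_anom):
    """S_a(m): fit on T(ave) anomalies only.
    S_m(m): fit on the min and max anomalies as two points per month."""
    tave = [(lo + hi) / 2.0 for lo, hi in zip(tmin_anom, tmax_anom)]
    fit_ave = slope_and_se(months, tave)
    x2 = [m for m in months for _ in range(2)]
    y2 = [v for lo, hi in zip(tmin_anom, tmax_anom) for v in (lo, hi)]
    return fit_ave, slope_and_se(x2, y2)
```

On a synthetic series with a 0.01 deg/month trend and a 15-degree half-spread between min and max, both fits return the same slope, but the min/max fit’s standard error is far larger, which is the point of items 3 and 4 above.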
Axel wrote:
Is this how it works? I’m asked a bunch of rhetorical questions that formulate (unproven) accusations against someone else, construct an association between this someone else and me through a third person who worked with them, and speculate about my sympathies, which altogether is supposed to be the alleged evidence that I was part of some sinister conspiracy against the heroic Anthony Watts. And what if I refuse to answer your inquisitorial questions? Is that proof that the conspiracy was real, instead of only a figment of your imagination?
Are you dreaming of future show trials against all those climate scientists whom you believe to participate in some evil world conspiracy that invented the “global warming hoax”, and doing a little training for it already?
No, I didn’t miss it. The conspiracy fantasy in there about me was one of the comments I meant.
Although I don’t understand what the listed models there are supposed to have to do with Watts et al.’s “revelations”, which I expect will likely go up in smoke anyway, due to the methodological and logical flaws I see in Watts et al.’s analysis.
[Snip. Stating as a fact that Anthony Watts “misleads the audience” is not acceptable here. You say similar things on other blogs. But you are not getting comments like that approved here. Any reference to this deletion will also be deleted. ~dbs, mod.]
Overview -Watts et al Station Siting 8-3-12 (PPT) UPDATED
I cannot access the updated version of the OVERVIEW PPT. The link goes to the older version (29-7-12). Is there another link somewhere?