Steven Mosher says: December 12, 2013 at 12:18 pm
Raw data is not crap. It may not tell you much until it is analyzed, but raw data is raw data. Manipulated data is crap. It destroys the information the raw data contains. Regardless of what you want it to tell you.
MarkB says:
December 12, 2013 at 2:12 pm
“The issue with satellite measurements isn’t so much the accuracy of the measurement but figuring out precisely what region of the atmosphere has been measured.”
And that’s it – the satellite measurements of the atmosphere are of course useful, but we must always bear in mind that they are measurements of atmospheric layers whose boundaries are not always clear (and the adiabatic lapse rate in the troposphere is relatively steep and changes considerably with water content, so this matters a great deal), and they are not the same thing as the in-situ surface air and SST measurements.
DirkH
December 13, 2013 5:04 am
Steven Mosher says:
December 12, 2013 at 10:37 pm
“You like satellite data?
Uah stitches together various satellites by making adjustments to data. For example orbital decay.
And uah doesnt measure temparature. Its raw data is a voltage. This gets turned into a temperature by applying a physics model. That model is also the same model that says co2 warms the planet. ”
The last sentence is a lie. The model says that CO2 absorbs and emits certain IR frequencies, not more and not less. You slip in such lies all the time. That’s why I think you either have a political agenda or a very muddled thought process.
Mindert Eiting
December 13, 2013 5:10 am
A year ago I read that about thirty percent of the surface stations in the USA show an individual cooling trend (a negative regression slope). The first question you should ask is whether the distribution of station regression slopes within a given data set differs from the distribution among the dropped stations. In the last decades of the twentieth century more than 80 percent of all stations were dropped and, counting the newly included stations, almost no station of the old population survived. If the drop-out were random, or determined by coverage, the effects would be harmless, but it can be shown that stations were dropped on the basis of their past records. That is the worst kind of drop-out, and it cannot be repaired by statistical technicalities. From noise from multiple sources you can create any signal you want by source selection. Take a random sample from the dropped stations and compare the mean and variance of their slope distribution over a certain period with the mean and variance of that distribution in the data set. Has this been done for the new BEST?
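For what it’s worth, the check described above is only a few lines of code. A minimal sketch, assuming the per-station series have already been split into “retained” and “dropped” files (the file names and column layout here are hypothetical):

```python
# Sketch: compare per-station trend (regression slope) distributions for
# retained vs. dropped stations. File names and column layout are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

def station_slopes(df):
    """OLS slope of each station's anomaly series, in deg C per decade."""
    slopes = {}
    for station_id, grp in df.groupby("station_id"):
        t = grp["year"] + (grp["month"] - 1) / 12.0
        slope_per_year = np.polyfit(t, grp["anomaly"], 1)[0]  # [slope, intercept]
        slopes[station_id] = slope_per_year * 10.0
    return pd.Series(slopes)

retained = station_slopes(pd.read_csv("retained_stations.csv"))
dropped = station_slopes(pd.read_csv("dropped_stations.csv"))

print("retained: mean %.3f  var %.3f  n %d" % (retained.mean(), retained.var(), len(retained)))
print("dropped:  mean %.3f  var %.3f  n %d" % (dropped.mean(), dropped.var(), len(dropped)))
t_stat, p_val = stats.ttest_ind(retained, dropped, equal_var=False)  # Welch's t-test
print("Welch t = %.2f, p = %.3g" % (t_stat, p_val))
```

If the dropped and retained slope distributions differ materially, drop-out was not harmless.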
mwgrant
December 13, 2013 6:13 am
DirkH
“The last sentence is a lie. The model says that CO2 absorbs and emits certain IR frequencies, not more and not less. You slip in such lies all the time.”
The language here is why I think your biases are operating at full capacity. And BTW the second sentence in the quote here is ambiguous at best and incorrect at worst. I’ll be charitable and assume the former.
Anton Eagle says:
December 12, 2013 at 2:08 pm
Why this has to be explained to “scientists” is beyond me. Anyone that publishes any climate article using anything other than raw data is, simply put, not a scientist. Instead, they are simply a propagandist.
===========
One of the very best examples of how to keep records is the old manual ledger kept by accountants before computerization. In such a ledger you NEVER update or delete any record. All changes to the data must be done via inserts. In this fashion a full audit trail is preserved.
Using an eraser to delete or update any data in a manual ledger was forbidden. It was considered “cooking” the books. Instead, if you wanted to make a correction, you inserted two new records: one was a reversal of the previous record, the other the corrected amount.
The problem is that in the age of computers we started using updates and deletes to “correct” data errors, and the corrections themselves are also full of errors. These are processing errors, as opposed to the data errors in the raw data. And if you update or delete any data, you can no longer tell data errors from processing errors.
So, if you are working with any data that contains any updates or deletes you are working with garbage, because you cannot separate data errors from processing errors. Any attempt to place error bounds around the data is nonsense, because you cannot determine the error rate: your calculations assume the processing error rate is zero, which is a false assumption.
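A toy sketch of the insert-only convention described above, with a made-up record layout: a correction is appended as a reversal plus a corrected entry, so the full audit trail survives.

```python
# Toy append-only "ledger": corrections are new rows, never updates or deletes.
from datetime import date

ledger = []  # each entry: (date, station, value_c, note)

def record(obs_date, station, value_c, note=""):
    ledger.append((obs_date, station, value_c, note))

def correct(obs_date, station, old_value_c, new_value_c, reason):
    # Reversal of the erroneous record, then the corrected amount.
    record(obs_date, station, -old_value_c, "reversal: " + reason)
    record(obs_date, station, new_value_c, "correction: " + reason)

record(date(1934, 7, 1), "USC00123456", 38.3)
# Later we decide 38.3 C was a transcription error for 33.8 C:
correct(date(1934, 7, 1), "USC00123456", 38.3, 33.8, "transcription error")

# The current value is the sum of all entries for that date/station,
# but every step that led to it is still visible.
current = sum(v for d, s, v, _ in ledger if d == date(1934, 7, 1) and s == "USC00123456")
print(current)        # 33.8
print(len(ledger))    # 3 rows kept, nothing overwritten
```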
Bill Illis
December 13, 2013 6:31 am
Many of you would have, at one time, built a database of the temperature record where you live. Maybe it’s got some problems. Maybe it is good.
You can compare that to what Berkeley Earth (BEST) has come up with.
At this site, just type in your city name. You’ll get a link to the temperature record which Berkeley uses, maybe several individual stations (and it may have the raw data from your location, but I would use the actual raw record you have built for your location). http://berkeleyearth.lbl.gov/city-list/
You’ll find that Berkeley has a higher temperature increase trend than you have.
Reg Nelson says:
December 12, 2013 at 1:55 pm
“NASA claims the accuracy of the satellite measurements are within 0.03 C. Do you have evidence to suggest otherwise?”
The question is what is measured there. Are you sure it is the surface temperature as measured by the in-situ instruments? I very much doubt it. Also, more than two-thirds of the Earth’s surface is ocean, and the problems with satellite measurements of SST are endemic and fundamental, because satellite instruments based on IR and microwave sensing are unable to “see” beyond the very surface skin of the ocean, where the temperature, due to insolation and evaporation, is usually not representative of the layer where the in-situ measurements are made. Not to mention that any cloudiness immediately makes even this “100 micrometre SST” unmeasurable from a satellite.
And even if all this were solved one day, which I rather doubt, there would still be no long historic record anyway. So I still think that the in-situ measurements have their value, especially when it comes to the long historical instrumental time series (such as the CET or the Klementinum series I mentioned here) – those series should be treated with much more respect than the usual “raw” data treatment in the global temperature composites.
For the ocean, I think the data from, say, ARGO are much more telling than the satellites. Even over the relatively short period it covers (the last 10 years or so) it already consistently shows things which, in my opinion, the satellites would never be able to measure, let alone with sufficient accuracy.
While I am at it – ARGO shows a consistent cooling of the sea surface layer over its whole era since 2004; the downward slopes, especially in the upper photic layer, are considerably steeper even than in HadSST3 (and the ARGO global SST signal much more resembles what is predicted theoretically). And when it comes to the yearly temperature maxima, caused quite clearly by the Earth’s perihelion coinciding closely with the start of southern-hemisphere summer (the hemisphere where the bulk of the ocean resides), the cooling is already identifiable even below the bottom of the photic layer at 150 m depth. In my opinion the ARGO in-situ data show not only a striking dependence of the ocean surface layer temperature on the insolation variation, but more or less a couple of decades of “temperature history of the ocean insolation” as the heat waves progress downward – something that is quite well identifiable in the ARGO data, despite being partially obscured by the yearly insolation pattern, and that leaves no room for doubt about the intimate dependence of the OHC, and of the resulting temperature in the ocean photic layer, on the insolation variation. So if there is a candidate for adding to BE to make it a truly global temperature dataset, in my opinion the ARGO in-situ ocean surface layer data would be a much better choice than the satellite records.
beng
December 13, 2013 6:34 am
If it’s the “new” BEST, the old BEST wasn’t the best. Best that….
So, uh… I saw a comment about how “uah doesnt measure temparature. Its raw data is a voltage. This gets turned into a temperature by applying a physics model”, which sounds like there is a “then a wizard does something” step inserted in there?
I have to say, electricity isn’t magic.
If a receiver shows a certain voltage change and is properly designed then you can eliminate other causes for that voltage change until you can safely assume that there was a specific type of signal received.
Assuming then that people aren’t standing around outside with open microwave ovens trying to spoof your detector you can look at what sort of phenomenon can cause microwave emissions of that type and characterize said emissions accordingly.
If a randomized band of frequencies within certain parameters is typical of thermal emission by bodies at a certain temperature, you can then point to a detection of that signal and say “there is a body within the view of this instrument at that temperature, or some jerk is going through a lot of effort to generate noise with certain properties to try and throw off our data collection, or the instrument is broken”. What you can’t do is say “oh that’s a load of crap because microwave sounding isn’t a direct detection, it’s just interpreting signals”… at least not with any sort of credibility.
You aren’t actually seeing these words on the screen, they’re just signals received by your retina and interpreted by the relevant structures in your brain which then pass a note to the homunculus sitting in your skull which says “hey jerk, you saw this” and you register it as an impression of light and dark patterns on a screen representing certain information, which again is interpreted secondhand by the relevant processing routines inside your head, which pass a note back to the homunculus in your head which says “hey jerk that stuff you saw means this”, and then you understand these signals and interpret it accordingly.
The physics involved in microwave sounding is pretty old stuff involving the response of resistors to changes in a linked antenna, light pressure, black body calibration, and so forth. So arguing that “it is just a change in voltage” and then jumping to “and that is interpreted by the same models as the ones that say CO2 is warming the planet” is rather nonsensical.
You can understand the responses of a microwave sounder in terms of some electrical engineering and a bit of quantum mechanics, we’re talking stuff that was around in the 40’s, when the systems were theorized and developed.
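To make the point concrete, here is a toy version of the blackbody-calibration step mentioned above: a two-point calibration against the on-board warm target and the cold-space view, of the general kind microwave sounders apply each scan. The numbers are invented and the small nonlinearity corrections real instruments use are ignored.

```python
# Toy two-point calibration: raw radiometer counts -> brightness temperature.
# Real MSU/AMSU processing works in radiance and adds a nonlinearity term;
# this linear-in-temperature version just shows there is no wizard involved.

T_COLD_SPACE = 2.73   # K, cosmic microwave background seen in the cold-space view

def brightness_temperature(scene_counts, cold_counts, warm_counts, warm_target_k):
    """Linear interpolation between the two calibration points."""
    gain = (warm_target_k - T_COLD_SPACE) / (warm_counts - cold_counts)  # K per count
    return T_COLD_SPACE + gain * (scene_counts - cold_counts)

# Invented numbers: cold view reads 1500 counts, warm target (285.1 K from its
# platinum thermometers) reads 24000 counts, and the Earth scene reads 20100 counts.
tb = brightness_temperature(20100, 1500, 24000, 285.1)
print(round(tb, 2), "K")  # ~236 K, a plausible mid-troposphere brightness temperature
```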
To go with Max’s post above, what do you think electronic thermometers generate? Thermocouples generate a voltage output that’s measured as a temp; thermistors’ resistance varies with temp and is used to generate, wait for it, a varying voltage that’s measured as a temp. You can use a diode as a thermometer; you can even use a photodiode in your digital camera as a thermometer.
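The same arithmetic in miniature for one of those electronic thermometers: a sketch that converts a measured divider voltage to a thermistor resistance and then to temperature using the common beta-parameter equation (the part values and wiring are made up for illustration).

```python
# Thermistor reading -> temperature via the beta-parameter equation.
# R0, T0 and BETA are made-up but typical values for a 10k NTC thermistor.
import math

R0, T0, BETA = 10_000.0, 298.15, 3950.0  # 10k ohms at 25 C (298.15 K), beta in K

def ntc_temperature_c(resistance_ohms):
    inv_t = 1.0 / T0 + math.log(resistance_ohms / R0) / BETA
    return 1.0 / inv_t - 273.15

# The "raw data is a voltage" part: a divider with a 10k fixed resistor on a
# 3.3 V supply gives the thermistor resistance from the measured voltage.
def resistance_from_divider(v_measured, v_supply=3.3, r_fixed=10_000.0):
    return r_fixed * v_measured / (v_supply - v_measured)

print(round(ntc_temperature_c(resistance_from_divider(1.45)), 1), "C")  # ~30.6 C
```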
The issue with satellite measurements is that you don’t really measure surface temps.
One of the most glaring examples of the problem with using updates and deletes instead of inserts was the CRU at East Anglia. The foremost temperature record in the world, yet it was discovered they no longer have the raw data. All they have is the processed data. As such there is no way to determine the processing error rate in their results, and no way any faith can be placed in the quality of their data. It is simply not fit for purpose.
To give an example, say a company published a year end financial report, and then destroyed all the raw data behind the report. Would you trust the financial report? Of course not, because you would suspect an Enron. You would suspect that the raw data had been destroyed to hide the truth, that the financial reports were a result of creative accounting that would not stand up to scrutiny.
As a result of Enron we got Sarbanes–Oxley. Criminal penalties for falsification of accounting data. Yet we are being asked to change the economies of the world based on temperature data that holds absolutely no assurance of data quality. No independent audit. Absolutely no penalties for data manipulation or fraud.
Temperature data that in many cases has been actively hidden from the public, even though it was paid for by the public. And as Climategate showed, leading Climate Scientists were involved in a conspiracy to withhold the raw data from the public, and have never been held to account for their actions. So why would anyone think they have corrected their behavior?
Marc77
December 13, 2013 6:47 am
They should not use breakpoints. It makes no sense to add an offset to many years of record because of a small jump.
Stations are known to measure temperatures lower than a simple thermometer attached to your house. This is because the Stevenson screen limits the amount of warming in different ways. So when the screen slowly decays, the station shows an artificial warming. This is not corrected because it is too slow. When the station is renovated, there is a sudden cooling, and it will be corrected because it shows up as a jump in the record.
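Marc77’s mechanism is easy to simulate. In the toy sketch below (all numbers invented), a flat “true” climate picks up a slow spurious warming as the screen weathers, the drift resets at each renovation, and a simple breakpoint “correction” removes only the sudden cooling jumps – leaving an adjusted warming trend even though the underlying climate is flat.

```python
# Toy simulation: slow screen-decay warming is invisible to a jump detector,
# while the sudden cooling at each renovation gets "corrected" away.
import numpy as np

years = 60
drift_per_year = 0.05      # invented: spurious warming as the screen weathers, C/yr
renovation_every = 15      # invented: screen renovated every 15 years

true_temp = np.zeros(years)            # flat climate, no real trend
drift = np.zeros(years)
for y in range(1, years):
    drift[y] = 0.0 if y % renovation_every == 0 else drift[y - 1] + drift_per_year
measured = true_temp + drift           # sawtooth: slow rise, sharp drop at renovation

# "Homogenized": detect the downward jumps and shift earlier data to remove them,
# which leaves the slow upward drift in place.
adjusted = measured.copy()
for y in range(1, years):
    jump = measured[y] - measured[y - 1]
    if jump < -0.3:                    # breakpoint detector only sees big jumps
        adjusted[:y] += jump           # align the earlier segment with the later one

print("raw trend      %.3f C/decade" % (np.polyfit(np.arange(years), measured, 1)[0] * 10))
print("adjusted trend %.3f C/decade" % (np.polyfit(np.arange(years), adjusted, 1)[0] * 10))
# The raw sawtooth averages out to a near-zero trend; the "adjusted" series warms.
```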
beng
December 13, 2013 6:51 am
***
Bill Illis says:
December 13, 2013 at 6:31 am
***
Bill, Frostburg, MD is near me & still fairly small & presumably a small UHI effect. Somehow BEST turns raw data showing slight long-term cooling into a warming trend: http://berkeleyearth.lbl.gov/stations/34727
What a surprise. /sarc
Stephen Richards says:
December 13, 2013 at 1:23 am
The three of them have demonstrated an attachment to the AGW meme as solid as Hansen and Mann. No respect for them I’m afraid.
———————————————————–
There is a very big difference between someone you disagree with and someone trying to sell you snake oil. Confusing the two makes you look bad. The BEST project is very transparent; if they are selling snake oil then they are at least advertising it as such. In the many years I have been following climate science I have found many people who do not appear to be honest and sincere in their opinions. Mosh isn’t one of them (and not even close, for that matter). Prove him wrong or thank him for the work that he does.
One of the biggest problems in “correcting” raw data is that we rarely look for processing errors when the results match our expectations. As a result processed data is almost always biased in the direction of the expectations of those controlling the data processing.
This is not a conscious process. It is subconscious. As a result the bias cannot be detected by those doing the processing. Thus the need for an independent audit of all data processing results. Even this is problematic, because the auditors will be blind to errors that match their expectations.
Joseph Murphy says:
December 13, 2013 at 6:56 am
In the many years I have been following climate science I have found many people who do not appear to be honest and sincere in their opinions.
=============
Bias due to the experimenter expectation effect is independent of honesty or sincerity. In fact, honesty and sincerity can make the problem worse, because you are more likely to suspect errors in the work of a dishonest or insincere person. However, even the most honest and sincere among us still have bias. We all do, and it is the in-built bias that blinds us to errors that match our bias. Thus the failure of peer review to catch errors when the author and reviewers share the same bias.
Joseph Murphy
December 13, 2013 7:36 am
@ferd berple
Agreed, but in the case you mentioned one should disprove the bias, not dismiss the work out of hand. Dismissing BEST because you believe there is a bias, without demonstrating it, shows more bias with you (a general ‘you’) than with them.
Sleepalot
December 13, 2013 7:37 am
@Wayne Thanks for the reply – I usually get met with silence.
bit chilly
December 13, 2013 7:58 am
From papers I have recently read, it would seem the earth’s energy budget is never in equilibrium; there is either a net gain or a net loss, as would be expected in a chaotic system.
To my mind, any increase in temperature in the arctic could well be seen as a first indicator that cooling is on the way. Whether it be air or water, anything moving towards the arctic will end in one result: it will cool. During that cooling period the average temperature of the sea and air may well rise a small amount for an indeterminate period, but eventually it will return to cooling.
Simple reasoning that requires a leap of faith, but I believe that is what is currently happening. With southern hemisphere ice on the increase and indications the arctic is getting cooler, I am fairly certain I will find out within the next ten years.
Stephen Rasey
December 13, 2013 8:12 am
@Steven Mosher at 10:20 pm: “Stephan Rasey. I dont think you understand what the process is.”
Steven chooses two cases to justify BEST methodology.
1. Moving a station from a mountain top into a valley.
2. Changing the Time of Observation
Is that the BEST you can do? How many stations fit your example #1?
Of course #1 ought to be a new station record.
Let’s face it, it would be far better to have two stations with overlapping records. Given the Great Thermometer Dying, it is an open question in my mind how many overlapping records have been eliminated.
I’m skeptical of TOBS adjustments in general. I think they are overwrought and an excuse to adjust data. TOBS adjustments in part assume that thousands of volunteers faithfully recording temperatures year after year were idiots, recording daily mins and maxes without regard to the day’s weather.
In either case, splitting the record masks instrument drift as real temperature change. Unless you know the drift from a recalibration done at the end of the record, you have no idea what it is. Since BEST itself is finding the breakpoints, you don’t have this recalibration at the end of a record.
Let’s take a more realistic example of #1.
You have a Stevenson screen at an airport.
The Stevenson screen gets moved to another place at the airport because airport expansion and growth in activity over the years degraded the siting from a Class 2 to a Class 5. The screen’s new location is now a Class 1. SHOULD it be a split? In my book, it is a much tougher call, because the act of moving the station is one of recalibration and restoration of the long-term local climate. Preservation of low-frequency data content is paramount, so the bias should be to not split the record.
How many temperature stations are in the BEST dataset?
What percentage of them have already been adjusted by others?
How many breakpoints (slices) did BEST create on that dataset?
What is the distribution of segment lengths after the breakpoints are applied?
(What is the histogram of segment lengths in bins of 2-year width?)
What is the average length of a segment? I’ve read reports that it is 12 years.
When the segment slopes are subject to kriging, is there a weighting in the estimated trend that gives greater weight to longer segments?
How many temperature segment breakpoints do you KNOW fit either of the two categories above? You don’t have the metadata to justify most of these breakpoints.
I recently adjusted a 1000 year dataset of observations of the daytime sky.
While past observers stated various shades of blue with white clouds they did not have color calibration tools like we have today (like Pantone color charts). I adjusted those further out to be more accurate. The adjusted data (that is, the more accurate data) clearly shows that the sky in the past was maroon and the clouds gold. The current blue / white scheme is clearly a severe anomaly and we all need to make drastic changes to our lifestyles to restore it.
When do I pick up my Nobel Prize?
Thanks for the explanation, Mosher. I think the slicing is OK, but I’m not sure how it affects things, as I am no better than any layman at reading a graph and interpreting data.
As for the graphs, they are what they are. Perhaps the BEST we have, but some think otherwise. Time will tell, as someone else commented.
I once wrote a financial program to predict company revenues based on a sum of all projects across several companies, projecting each company’s projects from their progress and cost to date (and the shape of typical project cost-completion curves for various sizes of projects and components). The employees called it the “Delbeke Lie Detector” because many would get calls from head office asking how much trouble their project was in before they themselves recognized the trouble: we could parametrize a few simple things and see when projects were going off the rails. They hated it and loved it at the same time, and it helped corporate ensure we didn’t get too many financial surprises. Some folks would try to work the system, but they always failed in the end. I have been retired for many years but the system is still in place.
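Something in the spirit of that program takes very little code: fit the cost-to-date against where the project sits on a typical cost-completion S-curve and project the final cost. The sketch below is purely illustrative – the curve shape, threshold and numbers are invented, not the actual Delbeke system.

```python
# Toy "lie detector": project final cost from cost-to-date and schedule progress,
# assuming spend follows a typical logistic S-curve over the project schedule.
import math

def s_curve(fraction_of_schedule, steepness=8.0):
    """Cumulative fraction of budget typically spent at this point in the schedule."""
    return 1.0 / (1.0 + math.exp(-steepness * (fraction_of_schedule - 0.5)))

def projected_final_cost(cost_to_date, fraction_of_schedule):
    return cost_to_date / s_curve(fraction_of_schedule)

# Invented project: budget 10.0 M, halfway through the schedule, 7.2 M already spent.
budget, spent, progress = 10.0, 7.2, 0.5
forecast = projected_final_cost(spent, progress)
print("forecast at completion: %.1f M (budget %.1f M)" % (forecast, budget))
if forecast > 1.1 * budget:
    print("head office calls: this project is in trouble")
```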
I hope Climate Science hasn’t fallen into the trap of trying to work the system ’cause the chickens always come home to roost in the long term.
Thanks to all the contributors to WUWT including MOSH for keeping my brain active.
There have been other published methods. Courtillot’s group, for instance, has papers that estimate the land average by selecting only stations with good data. The temperature curve takes a different shape, especially, IIRC, over Europe.
Secondly, estimating a point in a field from its neighbours, by whatever method, involves a circularity that is not addressed by the superiority of the method used.
Steven Mosher says:
December 12, 2013 at 10:37 pm
Reg.
You like satellite data?
Uah stitches together various satellites by making adjustments to data. For example orbital decay.
And uah doesnt measure temparature. Its raw data is a voltage. This gets turned into a temperature by applying a physics model. That model is also the same model that says co2 warms the planet. I bet you thought uah was data. Its not. Its adjusted modelled outputs. Go read the theory behind satellite data.
+++++++++++
Actually Steve, you are artfully one of the finest verbal prestidigitators I’ve had the honor of conversing with. Your brain is wicked smart.
The satellites (that you seem to pooh-pooh) use the most precise type of electronic temperature sensor known to man: RTDs. You are correct that RTDs use the resistance measured across a platinum resistor which is excited by a known small current. Said another way, the raw data comes from measuring the resistance by applying a precise current and measuring the voltage. There are three physical constants, referred to as Alpha, Delta and Beta, which reflect the properties of the platinum. As well, sensors are calibrated for scale and offset before they are put into service. If a 3- or 4-wire RTD is used, the controller also needs to constantly measure the resistance of the leads to the RTD so that the resulting voltage measurement reflects the temperature. A 3-wire assumes both leads have the same resistance; a 4-wire measures the resistance of both leads. The resistance of the leads is subtracted from the resistance of the RTD so that only the resistance of the sensor is used in the “calculation” of temperature.
Further, the current that flows through the platinum needs to be low enough that it does not heat the sensor significantly and hurt the measurement. That self-heating is nowadays several orders of magnitude smaller than the resulting reading.
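For readers who want the arithmetic spelled out, here is a sketch of the measurement just described: subtract the measured lead resistance (the 4-wire case), then invert the Callendar–Van Dusen relation to turn the platinum resistance into a temperature. The A and B coefficients are the standard IEC 60751 Pt100 values; the excitation current and readings are invented.

```python
# RTD reading -> temperature: subtract measured lead resistance, then invert the
# Callendar-Van Dusen equation R(T) = R0*(1 + A*T + B*T^2), valid for T >= 0 C.
import math

R0 = 100.0       # ohms at 0 C for a Pt100
A = 3.9083e-3    # standard IEC 60751 coefficients
B = -5.775e-7

def rtd_temperature_c(v_sense, i_excitation, r_leads):
    """4-wire style measurement: V/I gives total resistance, leads are subtracted."""
    r_element = v_sense / i_excitation - r_leads
    # Quadratic formula for T in R = R0*(1 + A*T + B*T^2)
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_element / R0))) / (2.0 * B)

# Invented numbers: 1 mA excitation, 0.1118 V measured, 2.0 ohms of lead resistance.
print(round(rtd_temperature_c(0.1118, 1.0e-3, 2.0), 2), "C")  # ~25.2 C
```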
So your attempt to obfuscate the satellite sensors with your statement that the raw data is a voltage is telling: you have said nothing towards addressing people’s questions.
Take what Janice clearly said about you to heart and prove her wrong.
I asked you some specific questions after summarizing what I think BEST does. You say things, but do not answer. Either you do not know, but won’t say so, or you do know, but do not want us knowing what you know. Why must you go through gyrations, as I think BEST does, to avoid cogent discussion?
I assume at this point that you know BEST starts with bad data (because it refuses to get rid of the poorly sited stations). It then slices and estimates after trusting that this bad data should be used.
This posting by Willis is worth a re-read:
http://wattsupwiththat.com/2011/11/27/an-open-letter-to-dr-phil-jones-of-the-uea-cru/