Holts,
Okay, my bad then. It just seems like whenever Mosher comes along everybody gets a hard on for some reason. It bugs me when people casually lump him in with the real scumbag data manipulators out there, apparently for no better reason than he’s annoying and says stuff people don’t want to hear sometimes.
ref the video from Mario of Mr Mosher
It is suggested by some thinking, older people (far more than one; they have written books and given lectures, and the information has been out there for many years) that the brain subconsciously directs the eyes when thinking: the eyes look, in defined combinations, up, down, left, or right, depending upon whether the mind is recalling fact or inventing fiction, though this is obviously overwritten by extraneous influences.
I know what I saw in the video on the first run through, a pretty clear cut well documented example.
regards
Anton Eagle
December 12, 2013 2:08 pm
This whole issue is really much simpler than most people are making of it.
Altering data is wrong. Period.
I work in healthcare… in a very technical sub-field of healthcare. In my field, errors in calculations and errors in commissioning systems will potentially kill people… in bad, horrible ways. In such a field, altering data in any way is absolutely unheard of. Never done… except by the unethical few that have some agenda (…cough… money) other than patient care. And they usually end up in jail or sued for everything they have.
In short, if I were to do even 1/10th of the contortions that we see with this kind of climate data, I would be fired and pretty much run out of my field for life… and deservedly so.
Feel free to throw out bad data if you wish… all scientific fields do this as needed. But don’t then go and guess what that data would be if it were “good”. If you can’t re-measure (which we can’t for historical time-series) then simply do the best you can with the data you have… but leave it unaltered. Period.
Why this has to be explained to “scientists” is beyond me. Anyone that publishes any climate article using anything other than raw data is, simply put, not a scientist. Instead, they are simply a propagandist.
MarkB
December 12, 2013 2:12 pm
Reg Nelson says:
December 12, 2013 at 1:55 pm
tumetuestumefaisdubien1 says:
December 12, 2013 at 1:21 pm
Reg Nelson says:
Satellites actually are unable to measure surface temperatures with the accuracy the surface stations do. So I’m not quite sure what the message is?
—————
NASA claims the accuracy of the satellite measurements is within 0.03 C. Do you have evidence to suggest otherwise? Or are you saying that measurements taken in the 1920s are more accurate than that?
The issue with satellite measurements isn’t so much the accuracy of the measurement but figuring out precisely what region of the atmosphere has been measured. The final product is separated from raw data by a lot of data processing.
One overview is presented here: http://www.remss.com/measurements/upper-air-temperature
NikFromNYC
December 12, 2013 2:15 pm
Hey, Steven Mosher, the series I just posted shows no peak in 2007 whatsoever, and there are no “empirical break points” added by BEST, yet the result at the bottom does show a huge spike in 2008. Is that not an “adjustment”?
Bill Illis
December 12, 2013 2:28 pm
What we need is a histogram of the breakpoints identified and pulled out.
For example, how many (and of what weight) were the temperature-decline breakpoints versus the temperature-increase breakpoints?
I think I asked for this before and was told it was about the same for both but I haven’t seen the data.
We are talking about a huge number of breakpoints here; on average, about 8 per individual station.
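The tally Bill Illis asks for is straightforward to build once you have the breakpoint offsets in hand. A minimal sketch in Python, using invented offsets since the actual BEST breakpoint list is not reproduced here:

```python
from collections import Counter

# Hypothetical per-station breakpoint offsets in deg C: positive means the
# slice after the breakpoint sits higher than the slice before it.
offsets = [0.4, -0.2, 0.1, -0.5, 0.3, 0.2, -0.1, 0.6]

signs = Counter("up" if o > 0 else "down" for o in offsets)
net = sum(offsets)

print(f"warming-direction breakpoints: {signs['up']}")
print(f"cooling-direction breakpoints: {signs['down']}")
print(f"net offset: {net:+.2f} C")  # the symmetry check being asked for
```

If the slicing really is direction-neutral, the "up" and "down" counts and the net offset should come out roughly balanced over the full station set.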
Mosher writes “We dont adjust data. We identify breakpoints and slice.”
It’s a valiant effort. But at the end of the day there is simply no substitute for a proper understanding of a temperature station that statistics simply can’t supply. For example, a tree growing near a weather station increasingly casts its shadow over the area and then one day gets cut down. Voilà, breakpoint. But without understanding the reality of the weather station environment, how do you interpret that?
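For readers unfamiliar with the "slice" step being debated: a toy version of breakpoint detection and slicing might look like the sketch below. This is a crude largest-mean-shift search, not BEST's actual empirical breakpoint method, and the station record is invented:

```python
def find_breakpoint(series, min_shift=0.5):
    """Return the index where splitting the series gives the largest
    jump in mean, or None if no jump exceeds min_shift. A toy stand-in
    for real changepoint detection."""
    best_i, best_shift = None, min_shift
    for i in range(2, len(series) - 2):
        left = sum(series[:i]) / i
        right = sum(series[i:]) / (len(series) - i)
        if abs(right - left) > best_shift:
            best_i, best_shift = i, abs(right - left)
    return best_i

# A station record with an artificial ~1 degree jump (e.g. the shading
# tree in the comment above being cut down).
record = [10.1, 10.0, 9.9, 10.2, 11.1, 11.0, 11.2, 10.9]
i = find_breakpoint(record)
segments = [record[:i], record[i:]] if i is not None else [record]
print(i, segments)  # slices the record into two segments at index 4
```

The statistical machinery finds *that* a jump happened; the commenter's point stands that it cannot say *why*, or whether the pre-jump or post-jump segment is the one that reflects reality.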
WeatherOrNot
December 12, 2013 2:43 pm
Raw data contains lots of crap. I maintain a well-sited weather station that sends data to the local NOAA automatically every 5 minutes. Despite my diligence in maintaining it there are still problems that interrupt it – power outages that disrupt the computer and console, occasional problems with the sensor suite, etc. I would guess that 1-3% of my data is inaccurate despite my efforts. I would also guess that there are similar problems all over the world, which is why raw data can be smelly. The key is to remove the smelliness in a methodical, objective manner that is free of political or other motivations. Isn’t that part of the reason for the existence of WUWT in the first place?
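The "methodical, objective" cleaning WeatherOrNot describes typically starts with simple automated checks. A minimal sketch, with illustrative thresholds that are not NOAA's actual QC rules:

```python
def qc_flags(temps, lo=-60.0, hi=60.0, max_step=15.0):
    """Flag 5-minute readings that are missing, outside a plausible
    range, or jumping implausibly fast. Returns a parallel list of
    booleans: True = keep, False = suspect."""
    flags = []
    prev = None
    for t in temps:
        ok = (t is not None) and (lo <= t <= hi)
        if ok and prev is not None and abs(t - prev) > max_step:
            ok = False  # spike check: a 15+ degree jump between samples
        flags.append(ok)
        if ok:
            prev = t  # only trusted readings anchor the spike check
    return flags

readings = [12.3, 12.4, 99.9, 12.5, None, 12.6]  # 99.9 = sensor glitch
print(qc_flags(readings))
```

Checks like these flag the obvious 1–3% of garbage without touching the values of the readings that pass, which is the distinction several commenters here are drawing between quality control and adjustment.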
RoHa
December 12, 2013 2:46 pm
All that poster shows is US warming, so I assume the warming isn’t Global and 95.55% of us are O.K.
tobyglyn
December 12, 2013 2:48 pm
jono1066 says:
December 12, 2013 at 2:01 pm
“ref the video from Mario of Mr Mosher…”
Your comment would carry more weight if you had noticed that it is not Steven Mosher in the video. Instead it is Robert Rhode or perhaps Rohde depending on where you get your information 🙂
Anton Eagle says:
This whole issue is really much simpler than most people are making of it.
Altering data is wrong. Period.
Spot on, Anton
If there is a suspicion that data is wrong, chuck it out.
If this means we have to admit we have gaps in the historic temperature record, man up and admit it.
dp
December 12, 2013 3:04 pm
Steve Mosher sez
We dont adjust data. We identify breakpoints and slice.
Then you abandon the data because it is crap, but apparently a nice starting point. And the result as we’ve seen is… wrong.
I do a lot of forensic work in the building industry. The computerized Heating Ventilation Air-Conditioning (HVAC) control systems (DDC or BAS) can trend critical data points which get used in the same way. Some people even remotely access the data and download trends without ever doing a sanity check on the sensors and their locations. No calibration checks, they are not even sure the sensor is located where its descriptor says it is.
I often encounter DDC systems taking 15-minute interval data (due to restrictive memory capacity) on a device that can cycle 100% in 3 minutes. Then the vendor wants the owner to invest great sums of money based on the results.
Had one building where the air-handler economizer dampers were not positioned where they should be based on the temperatures being mixed. Traced the control cabling to a nearby J-box, found them coiled inside and never landed in the local control panel. They were installed 4 years prior.
Outside air sensors are probably the most difficult to properly locate. Many designs will use a single sensor for operations at both the base and rooftop of high-rises, even though temperature can vary 15 degrees or more in some locations. I found one located on the south-facing wall of the penthouse, which was painted black.
Anytime I find conditions as those listed above, I have no faith in the controls, system operations, or past energy use. You essentially have to “reset” the building, then wait a year or two to start getting good data to work with.
We can do that with the planet when we get time travel? Go back and install sensors where we want them with accuracy needed…:)
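dp's point about 15-minute logging of a device that cycles in 3 minutes is classic undersampling, and it is easy to demonstrate. A quick illustration with a synthetic sinusoidal "damper position" signal (not real DDC data):

```python
import math

def sample(period_min, interval_min, hours=1):
    """Sample a sinusoidal signal with the given cycle period (minutes)
    at the given logging interval (minutes)."""
    n = int(hours * 60 / interval_min) + 1
    return [math.sin(2 * math.pi * (k * interval_min) / period_min)
            for k in range(n)]

fast = sample(period_min=3, interval_min=0.25)  # resolves the 3-min cycle
slow = sample(period_min=3, interval_min=15)    # typical 15-min trend log

# Every 15-minute sample lands at the same phase of the 3-minute cycle,
# so the logged trend looks flat while the device is actually cycling.
print(max(fast) - min(fast))  # full swing visible
print(max(slow) - min(slow))  # swing essentially invisible
```

The 15-minute trend is not merely low-resolution; it can be systematically misleading, which is exactly the forensic trap described in the comment above.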
Steven Mosher says:
December 12, 2013 at 1:39 pm
Mario.
Are we headed for doom. Dunno.
Our goal was simple. Collect and use all the data.
Use methods first suggested by skeptics.
Show everything we did.
Doomsaying is above my paygrade
+++++++++++
Thank you for responding, Steven. I’m impressed by the BEST study, but need to read it more closely. Something that bothered me a bit was that (if I recall) it confirmed in some people’s minds that the UHI did not really affect the resulting temperatures. I tend to believe that the UHI does in fact skew the numbers a bit higher than they otherwise would have been.
I’ve read your commentary on the IPCC a few times, and believe you have a balanced view on some of their science.
jono1066 says:
December 12, 2013 at 2:01 pm
ref the video from Mario of Mr Mosher
+++++++++
That video, as tobyglyn pointed out, was of Mr. Rohde, not Steven Mosher. (I mistakenly added an “s” to the end of his last name). I would be curious to read what you think of the shifty eyes… I believe the gestures have to do with something being digested or not. Mr Rohde looked uncomfortable to me.
I do not believe he made a cogent case that CO2 was the cause of the brief warming that stopped last century.
Mosh,
It was really pleasant to finally meet you after years of blog dialog.
Thanks for explaining how your poster differentiates your focus from the approaches of other GASTA data sets.
Meeting blog-commenting associates in person reminds one that we are all real people, not disembodied words. It softens the tone of future comments.
John
Steven Mosher says:
December 12, 2013 at 1:39 pm
…Doomsaying is above my paygrade.
===============================================================
A good line. I chuckled. 8-)
As a layman, I’m not sure what you mean by, “We dont adjust data. We identify breakpoints and slice. Then we estimate a field.”
I’m tempted to say “A Field of Dreams” but I honestly don’t know what you mean.
To all – I am soliciting criticism here. Please be brutal.
The BEST explanation states and I quote “The conclusion of the three groups [NOAA, NASA GIS and HadCRU] is that the urban heat island contribution to their global averages is much smaller than the observed global warming.”
First – that statement cannot even be honestly made for this century, since there is no global warming being measured this century, but I will get past that for now.
They (the three groups) make tiny adjustments through various listed (in BEST summary) methods. But when more simple analysis is done, the rural areas show a range from very little warming to no warming to slight cooling over the time periods studied, while the urban areas show significant warming (through the end of last century).
To me, this does not pass the smell test.
The Best (not BEST) thing to do is Not simply trust the adjustments made to bad data (I understand BEST says they do not actually adjust data – but they do say that the raw data is “crap” so I agree we can call it bad data). The summary at this point in my narrative is that “crap” data with “trusted” adjustments are “sliced” at identified “breakpoints” and conclusions are made. Do we agree so far?
The value of the above conclusions by BEST cannot, in my opinion, be very good: they are based on data which begins as “crap”, gets adjusted by three different (trusted?) sources, and, for value-added BEST science, is then sliced and estimated so it can be served with conclusions that include “CO2 accounts for the warming”.
Please help me understand where I am being ignorant here.
James Allison
December 12, 2013 5:06 pm
Steven Mosher says:
December 12, 2013 at 12:18 pm
There are no adjustments.
There is the raw data if you like crap.
There is qc data
There is breakpoint data.
Then there is the estimated field.
We dont adjust data. We identify breakpoints and slice.
Then we estimate a field.
——————————————————–
For the benefit of the ignorant among us (especially me) would you kindly post your explanation about how this all works?
Seriously.
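While waiting for Mosher's own explanation, here is one rough mental model of the last step, "estimate a field": interpolate station anomalies onto grid points, weighting nearby stations more heavily. The inverse-distance weighting below is a deliberately crude stand-in; BEST's published method actually uses a kriging-style approach, and the station values here are invented:

```python
def idw(grid_point, stations, power=2):
    """Inverse-distance-weighted estimate at grid_point from
    (x, y, anomaly) station tuples. A crude stand-in for kriging."""
    num = den = 0.0
    for x, y, t in stations:
        d2 = (x - grid_point[0]) ** 2 + (y - grid_point[1]) ** 2
        if d2 == 0:
            return t  # grid point sits exactly on a station
        w = 1.0 / d2 ** (power / 2)  # closer stations weigh more
        num += w * t
        den += w
    return num / den

stations = [(0, 0, 0.5), (1, 0, 0.7), (0, 1, 0.4)]
print(round(idw((0.5, 0.5), stations), 3))
```

The key point of a "field" estimate is that the answer at any location is a weighted blend of surrounding stations rather than the record of any single one, which is why sliced segments can still contribute.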
Mark Bofill
December 12, 2013 5:13 pm
That’s more like it. I’m not saying Mosher isn’t full of it, I’ve got no idea on that score and he very well might be, just that I don’t doubt his basic integrity.
Add me to the list of those who’d love to hear how BEST works.
Richard M
December 12, 2013 5:20 pm
There are innumerable ways to *adjust* the data. The chances of any particular method being correct are as close to zero as one can get. This is not unlike the problem with climate models. They are all wrong, we just don’t know which are the wrongest. I would much rather trust the law of large numbers than any other approach.
Personally, I would generate many random views of the data. From this one could get a feel for the range of possibilities hidden within the complete set.
In any event the raw data should always be shown together with the adjusted data. It is simply good form.
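Richard M's "many random views" idea is essentially bootstrap resampling. A minimal sketch, with invented anomaly values and standard-library tools only:

```python
import random
import statistics

def bootstrap_range(data, n_views=1000, seed=42):
    """Resample the data with replacement many times and report the
    spread of the resulting means: the 'range of possibilities'."""
    rng = random.Random(seed)  # fixed seed so the views are repeatable
    means = [statistics.mean(rng.choices(data, k=len(data)))
             for _ in range(n_views)]
    return min(means), max(means)

anomalies = [0.1, -0.2, 0.3, 0.0, 0.4, -0.1, 0.2, 0.5]
lo, hi = bootstrap_range(anomalies)
print(f"mean of raw data: {statistics.mean(anomalies):.3f}")
print(f"bootstrap range of means: {lo:.3f} to {hi:.3f}")
```

Showing the spread of resampled means alongside the raw and adjusted series would give readers exactly the "feel for the range of possibilities" the comment asks for.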
jorgekafkazar
December 12, 2013 5:36 pm
Reg Nelson says: You can put lipstick on a pig, but at the end of the day it’s still a pig.
But will it fly?
Corrected link…
Yet BEST data that show no breakpoints show great adjustment:
http://berkeleyearth.lbl.gov/stations/160013
Janice: Your comments always brighten my day/night.
Bloke down the pub says (December 12, 2013 at 10:31 am): “Look forward to hearing how it matches up to USCRN.”
On a related topic, did Anthony ever find the time to set up his CRN-based “New national temperature resource”?
http://wattsupwiththat.com/2012/10/12/new-national-temperature-resource-almost-ready/
I looked in the WUWT reference pages and didn’t find it.
Also, was there ever a followup to
http://wattsupwiththat.com/2013/02/14/the-monthly-report-noaa-never-produces-from-the-climate-reference-network/
I’m dying to learn the reason(s) for the summer/winter differences between COOP & CRN “average temperatures”.