Guest essay by Rud Istvan
The disclosures by Dr. Bates concerning Karl’s ‘Pausebuster’ NOAA NCEI paper have created quite the climate kerfuffle, with Rep. Smith even renewing his NOAA email subpoena demands. Yet the Karl paper is actually fairly innocuous compared to other NOAA shenanigans. It barely removed the pause, and still shows the CMIP5 models running hot by comparison. Its importance was mainly as a political pause-busting talking point in the run-up to Paris.
Here is an example of something more egregious but less noticed. It is excerpted from the much longer essay When Data Isn’t in the ebook Blowing Smoke. It is not global, concerning only the continental United States (CONUS). But it is eye-opening and irrefutable.
NOAA’s USHCN stations are used to create the US portion of GHCN. They are also used to create state-by-state temperature histories accessible on the NOAA website. A 2011 paper[1] announced that NOAA would be transitioning to updated and improved CONUS software around the end of 2013. The program used until the upgrade was called Drd964x. The upgrade was launched from late 2013 into 2014 in two tranches. Late in 2013 came the new graphical interfaces, which are an improvement. Then about February 2014 came the new data output, which includes revised station selection, homogenization, and gridding. The new version is called nClimDiv.
Here are three states. First is Maine, with the before/after data both shown in the new graphical format.
Second is Michigan, showing the graphical difference from old to new software.
And finally, California.
In each state, zero or very slight warming was converted to pronounced warming.
One natural question is whether upgraded homogenization (which among other things ‘removes’ urban heat island (UHI) effects) is responsible. From first principles, no, because the NOAA/NASA UHI policy is to warm the past so that current temperatures correspond to current thermometers (illustrated using NASA GISS Tokyo in the much longer book essay). This might be appropriate in California, whose population more than doubled from 1960 to 2010 (138%), with a current density of ~91 people/km2. Maine is a similar ocean/mountain state, but much more rural. Maine’s population grew by only a third (34%) from 1960 to 2010, and its current population density is just 16.5 people/km2. Maine should not have the same need for, or degree of, homogenization adjustment. Without the newest version of the US portion of GHCN, Maine would have no warming; its ‘AGW’ was manufactured by nClimDiv.
It is possible, albeit tiresome, to analyze all 48 CONUS states concerning the transition from Drd964x to nClimDiv. NOAA gave 40 of the 48 states ‘new’ AGW. The Drd964x decadal CONUS warming rate from 1895 to 2012 was 0.088 F/decade. The new nClimDiv rate from 1895 to 2014 is 0.135 F/decade, almost double. Definitely anthropogenic, but perhaps not actual warming.
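The decadal rates above are straight-line trends fitted to annual mean temperatures. For readers who want to check such numbers themselves, here is a minimal sketch of the calculation, an ordinary least-squares slope scaled to degrees per decade. The series below is synthetic, for illustration only; it is not NOAA data.

```python
# Sketch: a decadal warming rate is the least-squares slope of annual
# means, multiplied by 10. The series below is synthetic, not NOAA data.

def decadal_trend(years, temps):
    """Ordinary least-squares slope, in degrees per decade."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
    den = sum((x - mean_x) ** 2 for x in years)
    return 10.0 * num / den  # per-year slope times 10 = per-decade

years = list(range(1895, 2015))
# invented series with a built-in 0.0135 F/yr trend
temps = [50.0 + 0.0135 * (y - 1895) for y in years]
print(round(decadal_trend(years, temps), 3))  # -> 0.135
```

Applied to before/after state series downloaded from NOAA, the same slope calculation lets the Drd964x versus nClimDiv difference be checked independently.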
[1] Fennimore et al., Transitioning…, NOAA/NESDIS/NCDC (2011), available at ftp.ncdc.noaa.gov/pub/data/cmb/GrDD-Transition.pdf
OFF TOPIC
Watch Steve Goddard – Tony Heller testify before Washington State Senate Committee
http://www.tvw.org/watch/?eventID=2017021106
One natural question might be whether upgraded homogenization (among other things ‘removing’ urban heat island (UHI) effects) is responsible?
Obviously not – without any adjustments, the urban heat island effect is expected to cause a warming trend. A warming trend cannot be removed by adding a warming trend.
Science or Fiction February 7, 2017 at 2:19 pm: “(among other things ‘removing’ urban heat island (UHI) effects)”
WR: Correct. The problem is the anthropogenic UHI effect itself. That effect has to be removed. When you take out this ‘anthropogenic local warming’, more comparable temperatures remain. At the least, those remaining temperatures are less a reflection of the local (urban) circumstances and more correctly reflect the regional circumstances.
The same with the buoy/ship measurements: the anthropogenic ship-measurement anomalies have to be corrected if you want the remaining temperatures to represent the reality of the Earth. Never correct the well-calibrated buoy measurements, because they are already showing reality.
If I correct a measurement in my profession, I first have to prove that the original measurement is wrong, then put forward a scientific argument for the correction, and then provide both the uncorrected and corrected data together with the difference between them.
I would end up behind bars if I treated data and corrections the way NOAA has.
Ship-borne calibration experiments show the Argo buoy sea-surface temperatures have systematic errors of 0.5 C to 2 C.
Ship borne temperature measurements don’t impart a warming bias. /sarc
At Pat Frank
Inlet sea water pipes of varying length transiting engine rooms of varying temperatures, some pipes well insulated, some poorly; temperature sensors calibrated or not, as the readings are non-critical 99% of the time. The buoys might be hooey, but the ships are selected as a substitute for the sole purpose of reading higher. More pretend science in a field rife with corruption and politicization…
Science or Fiction February 7, 2017 at 3:39 pm
“If I correct a measurement In my profession, I will first have to prove that the original measurement is wrong, I will then have to put forward a scientific argument for the correction and then provide both the uncorrected data and the corrected data together with the difference between them.”
WR: And I suppose that every time you presented results (e.g. a graphic) based on modified data, you would have to say that you changed the data and include a source where people can find the explanation of why and how you changed it. Every time you present the results. Without that, your work would be seen as unscientific, and if you gave even the suggestion that it was ‘scientific’ (for example, by a corresponding press release pointing at the scientists involved), your work would be seen as ‘misleading’. Because we have to represent ‘reality’. That is how I learned it.
The whole “global temperature” business is a figment of someone’s imagination, being used for a political purpose. It is impossible to take local temperature readings (useful only for locals), merge them with other local temperature readings, making many adjustments to the original readings along the way, and finally produce an ACCURATE global average temperature – no ifs, no buts, it is impossible.
The temperature, elevation, humidity, and wind must all be accounted for, the objective being to determine heat flux all over the globe. It’s an impossible task.
Missing the boat here: temperature is only a broad, highly inaccurate estimate of the energy in a system, particularly one as varied and variable as the earth. Averaging temperatures does not give a thermodynamic energy average. Furthermore, the climate is a heat engine and responds to energy differences, down to millimeter scales, according to very well-established principles. A major reason why climate models have failed, besides the built-in biases, is that it is impossible to actually model the processes at a sufficiently small scale. Nobody talks about a millimeter grid scale for climate models.
Global average temperature is a useless measure for understanding the climate.
SciFi, the problem is worse since even rural stations are likely to reflect human interference in the microregional scales. Rural areas are not “natural.” They are frequently subjected to clearing for agricultural purposes, leading to effects tied to increased ground surface insolation, evaporation, etc. Also, as rural areas historically declined economically over the last century, you might expect that vegetation would return, shifting temperature data in a different direction. I know this has happened in the northeast. It takes just one hike with open eyes in upstate New York to realize that all the forest is “new.” The ruined stone walls mark formerly cleared pastures and fields. The USCRN is in reality the first US effort at a proper, scientific attempt to measure baseline “natural” phenomena in a “natural” environment.
I hope they make no mistakes with the United States Climate Reference Network. If they make no changes, that network may be used for a well-defined measurand – the simpler the better: fewer questions, less doubt. I have my doubts about “pairwise homogenization” – I wonder if that concept has been properly proven.
To “Ripshin”: thank you very much for the useful information you provided. You know the French, always fussy and demanding more: I am still trying to find out where that “shenanigan” thing came from (by the way, my post-graduate studies were on Latin and Greek epigraphy)…
Whatever Mr. Anthony says, I know for certain that in the olden days, olive trees did not thrive in Paris; they bloom there now.
Try this:
Shenanigan = German + East Anglian: schinageln + nannicking = working wool = pulling wool (over someone’s eyes’) = to deceive
And this is a bad thing? François doth protest too much. The Eiffel Tower wasn’t there in the “olden days” – shall we blame its presence on global warming too? On second thought, the Eiffel Tower was constructed with carbon-based energy (coal-fired steel smelters). Probably best to tear it down, since like the olive trees it’s a terrible sign of that terrible, terrible use of carbon-based energy by mankind.
Call it ironic iron then:
Lol. That had to be said. Everything we are – not just the Eiffel Tower, but ultimately also our shoe soles, the keyboards of our computers and the food we eat – we owe to fossil energies. However, this also shows that the Scaremongerians regard climate warming only as a vehicle for something even bigger, namely global world government. A world in which everything is governed by new, prescriptive norms, and supposedly renewable energies supply all the energy on the surface, but in the background fossil energies still do the work. As described in Orwell’s 1984: the old rooms with new whitewash, a beautiful new facade and behind it the horror. And why all this? Well, Einstein is supposed to have said that not only the stupidity of men as a whole is infinite, but also the hunger for power.
Wrought ironic.
Sorry if this is a long link, but there is a history of olives in Roman France.
I remember reading of olive pits found in Roman Legion garbage dumps.
The link below shows that there were olives in the south, though perhaps not Paris itself.
michael
https://books.google.com/books?id=1VZP3-RWH-UC&pg=PA178&lpg=PA178&dq=olives+in+roman+france&source=bl&ots=l4H3zzDTn7&sig=blJa14dI-XfhLV6sOHsELRhRIks&hl=en&sa=X&ved=0ahUKEwjXqbmWn__RAhVB5GMKHb6cACMQ6AEISDAK#v=onepage&q=olives%20in%20roman%20france&f=false
HA!
Rud ferrites out the cold rolled truth and Anthony provides riveting retorts, as François steels himself for more plain carbon facts!
“I know for certain that in the olden days, olive trees did not thrive in Paris, they bloom there now.”
Kind of the opposite of the farms in Greenland.
François February 7, 2017 at 2:21 pm: “I know for certain that in the olden days, olive trees did not thrive in Paris, they bloom there now.”
Paris surely will show an Urban Heat Island effect. Olive trees in Paris can be seen as a nice proof of the UHI. I suppose we don’t find olive trees in the region outside of Paris.
Olive trees are sensitive to frost:
“Frost Prevention
What are the variables regarding frost damage? The olive fruit can be damaged at temperatures below 29ºF (-1.7ºC). Young olive trees and branches can be killed at temperatures below 22ºF (-5.5ºC) and mature trees can be killed at temperatures below 15ºF (-9.5ºC). These are not precise numbers because the damage varies according to the specific temperature at ground level around the tree, the duration of the cold spell, the olive variety, the age of the tree, and whether the trees have had a chance to harden off.”
Source: https://www.oliveoilsource.com/page/frost-prevention
I suppose olive trees are like oranges: they can thrive well for years, but one real frost can damage them all. The difference here might be that oranges are very sensitive to frost, while as I read above olive trees can withstand some frost – but not a very severe winter. Although it must be said that even in a very severe winter the Urban Heat Island effect can make a difference of many degrees, as the Oslo UHI experiment of 25 January 2007 demonstrates. Which might be just enough for the olive trees to survive: see http://www.climate4you.com/OsloUHI%2020070125.htm
So the olive trees do well in a protective environment such as the big city of Paris, filled with heaters working all winter. Conditions without too much wind (buildings acting as windbreaks) will help the survival of the olives.
Russian olive can tolerate severe frost – no problem. Maybe you are referring to this cultivar. GK
François,
Did anyone think of bringing olive trees to Paris back in the olden days?
Many species of plant are transported and grown around the world. Many cities are much hotter than the surrounding countryside, and trees, plants, flowers and bees are thriving under these conditions.
BBC NEWS | UK | Education | Wild parrots settle in suburbs
news.bbc.co.uk/2/hi/3869815.stm
6 Jul 2004 – But there were also parrots reported in inner-London, including … Escaped parakeets have been spotted nesting in this country since the 19th Century
So disappointing that we are subject to fake science.
Nice work, Rud. I hope others add more evidence of bad science. This sort of stuff needs publicity and investigation by the authorities. How can global warming be taken seriously when the systems of measurement and data processing are so rotten? Why are we seeing people try to defend such unacceptable procedures? This is not science. At best it is incompetence, at worst it is designed to deceive.
Well, my ebook has many other proven examples. Remember, this quick post was one of many examples from but one essay of many.
Perfect! We have found the A in GW and it is the same as the F in front of RAUD.
Does anyone still have the actual raw data for temperature measurements, or did the agencies alter or destroy it? Is it publicly available in one place anywhere?
This article reminds me of another WUWT article, by Professor Robert Brown of Duke, who wrote:
“there is absolutely no question that GISS and HadCRUT, at least, are at this point hopelessly corrupted.”
In this article, Professor Brown has some interesting observations about GISS and HadCRUT, as well as the urban heat index adjustments which he states have been made in the wrong direction.
https://wattsupwiththat.com/2015/08/14/problematic-adjustments-and-divergences-now-includes-june-data/
World temperature has risen about 0.8 C since 1880, and we all believe that disaster is upon us? Like watching a worm wiggle and projecting that it is going to jump over a house!! In every city, daily temperature varies much more by time of day and position in the city. We have been conned!!
LittleOil, To your point: http://therightinsight.org/A-Little-Perspective
Firetoice, Thanks for excellent reference. Will mention you to my big bro.
According to the WMO method (the average of the 3 main surface temperature data sets), the total warming since 1880 is 0.94 C. That takes us to ~1.0 C above the long-term average, estimated to have been 14 C; so we are currently at ~15 C.
By ‘long term average’ I mean temperatures during the Holocene, which are estimated to have been fairly steady until recently [referenced chart not shown].
On that scale, the thick black line is now roughly at 1.0 C on the vertical axis. Whether or not this leads to ‘disaster’ remains to be seen.
Some believe. Too bad the vast majority of papers on the subject have shown that the Holocene has not had a steady temperature. Ever.
There is no “world temperature”.
The “World Temperature” is an average of local temperatures. It can be done, but sometimes leads to anomalous results.
The average adult human has 1 boob, 1 ball, and 2 1/2 kids.
If we consider unadjusted data from long term continental U.S. stations, there is no warming over the last century. link
Would you agree, then, that UHI shouldn’t be taken into account when considering US temperature station data?
Surprisingly, there is little difference between rural and non-rural long-running stations. In particular, see Figure 3 in the linked article. Based on that, I take no position on UHI.
The impact of that is greater than the chronic corruption.
Each little skewing of datasets is useful. It raises the significance of research findings. All the findings raise funding.
But so what? Everyone tries to emphasise the important parts of their work. All research highlights the most exciting possibility – regardless of probability.
But most academic funding is restricted to its own level. A bit here, a bit there… but no real change in the total pie to be nibbled. Little differences help but don’t change the game.
Trying to influence national policies… That’s a different league.
MC, leaving for dinner. But agree wholeheartedly. This post was my little effort to push back more publicly than the book. You might like it if you have not yet read it.
I’m glad that the effects of NOAA’s “homogenization” of station records are finally getting the public attention they truly deserve. Several years ago, even before the advent of the egregious adjustments made in GHCN3, I snarked on Climate Audit that they should be called “pasteurization,” because they clearly cooked the books. Nothing done by an administration that complained loudly about a “Republican war on science” ever changed my evaluation.
If a particular temperature data set contains a known bias, whether warming or cooling, do folks here agree in principle that it should be corrected for?
Pray tell, by what means are the numerical values of various biases in a station record “known” with enough accuracy to provide a reliable correction?
For instance, if a temperature station is located somewhere that has recently become built up and starts to show warming that disagrees with nearby stations that remain rural, then it seems reasonable to assume that the warming is most likely attributable to the development around the site in question.
I think that should be adjusted for. Do you not?
The method I would use (as very much a non-expert) would be to reduce the temperature of the affected site by the average difference from the nearby rural sites. Otherwise my regional dataset would retain a warm bias.
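To make the proposal concrete, here is a minimal sketch of that neighbour-difference correction with invented station numbers. It illustrates the commenter’s suggested method only; it is not NOAA’s actual pairwise homogenization algorithm.

```python
# Sketch of the neighbour-difference correction proposed above.
# All station values are invented; this is NOT NOAA's pairwise
# homogenization algorithm, just the commenter's suggested method.

def adjust_to_neighbours(urban, rural_sites):
    """Subtract the mean urban-minus-rural offset from the urban series."""
    n = len(urban)
    # average of the rural neighbours at each time step
    rural_mean = [sum(site[i] for site in rural_sites) / len(rural_sites)
                  for i in range(n)]
    # mean divergence of the urban site from its neighbours
    offset = sum(u - r for u, r in zip(urban, rural_mean)) / n
    return [u - offset for u in urban]

urban = [15.0, 15.6, 16.1, 16.5]            # site warming with development
rural = [[14.0, 14.1, 14.0, 14.2],
         [14.2, 14.0, 14.1, 14.1]]           # nearby rural neighbours
print(adjust_to_neighbours(urban, rural))
```

Note that a single constant offset removes the mean divergence but not a divergence that grows over time, which is one practical difficulty with correcting, rather than disqualifying, a UHI-affected record.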
While it may be certainly “reasonable to assume that the warming is most likely attributable to the development around the site in question,” it’s the lack of accuracy in the determination of that bias that is critical. In practice, you’ll seldom find nearby, certifiably “rural” sites that agree with one another sufficiently closely over the entire record span to provide a reliable correction. Such practical exigencies militate for disqualifying all UHI-corrupted records, rather than attempting to “correct” them.
1sky1
So is your suggestion that we simply leave known biases uncorrected?
DWR54 February 7, 2017 at 3:51 pm
“I think that should be adjusted for. Do you not?”
I do not. Once a station begins to show an effect that is not purely natural its data is suspect, period. It should be removed from the data set until it can be relocated to an unaffected spot. Nobody has any idea what the correct temperature reading would/should have been… Averaging temperatures of surrounding stations cannot be accepted either, as wind direction and wind speed, humidity, etc., all play a part in what the recorded temperature would have been. Baloney.
The best approach would be to remove the impact (tear down the buildings, plant trees, etc.) then make the measurement and after that critical and important task those unnatural surroundings can be installed again. Until another measurement is needed. /s just in case.
DWR54 February 7, 2017 at 5:09 pm:
That’s not even close to what 1sky1 said.
DWR54:
Look up the definition of “disqualify!” Disqualified records need no “correction,” since they have no impact upon results.
If a data set has a known bias, the equipment or enclosures should be replaced to eliminate the bias. If the data set is “adjusted”, it is no longer a data set, but rather an estimate set. The estimate might be better than the biased data, but it is still an estimate.
Climate science requires very careful calibration, installation and maintenance of measuring equipment, since it is not possible to rerun the experiment. It is possible, but highly questionable, to “fudge” the data, much less to make it up out of “whole cloth”.
Are you saying that it is necessary to relocate a temperature station every time a tree grows or sheds leaves, every time a road is built, every time trees are cut down or grow up, every time nearby buildings are erected or demolished, etc?
That sounds expensive. Surely it is within the wit of man to allow for these things and adjust for them.
No, he’s not saying that DWR54. Goodness gracious. Please tell me your absurd hyperbole was just pathetic rhetoric and that you really aren’t that stupid.
Look at his response to 1sky1 above.
Inability to respond to what others are actually saying is one of his most endearing features.
I completely disagree with adjusting for a “known” bias.
In the ERSST v3b introduction, Karl justified getting rid of the satellite sea surface temperature measurements because “the addition of satellite data led to residual biases”, without providing a single shred of evidence that this was actually the case. $billions wasted.
In the ERSST v4 introduction, Karl justified adjusting the buoys to the ship engine intakes: “Buoy SSTs have been adjusted toward ship SSTs in ERSST.v4 to correct for a systematic difference of 0.12°C between ship and buoy observations.” BUT there was simply NOT a bias in the buoys versus the ships over the whole time frame in question, and then Hausfather 2017 came along and said the buoys were actually measuring exactly the same as the ships. Throw out the $billions wasted on the buoys, which never did show any cooling bias.
I don’t know if anyone remembers “the cooling bias” introduced by the new MMTS sensors or the “cooling bias” in the new XBT floats or Phil Jones and the non-existent “0.05C UHI bias” which was not worth adjusting for …
But the word BIAS just allows the NCDCs and the Karls to adjust the temperature record up again, even when they have ZERO proof of the need for one.
The ONLY adjustment in history that was done carefully and with many different test measurements around the world was the “bucket adjustment” for pre-1940 canvas and wooden buckets which cooled off the sea water by 1.0C before it could be measured by a thermometer.
I think we just go back to the RAW record only and note there are various changes in instruments over time and there was this TOBs thing which probably doesn’t impact the trend and just be done with it. Leave all the records alone after this.
“I completely disagree with adjusting for a “known” bias.”
Then how could we distinguish a real deviation from ‘normal’ from a spurious one?
Bill Illis February 7, 2017 at 4:17 pm “I think we just go back to the RAW record only and note there are various changes in instruments over time and there was this TOBs thing which probably doesn’t impact the trend and just be done with it. Leave all the records alone after this.”
WR: All that ‘homogenisation and adjusting’ of temperatures means that I don’t trust any of the surface records. I am looking at UAH temperatures and, so far, RSS’s. And yes, I also want our ‘raw data’ back. All of them.
Adjusting buoys up to ship data is just dumb. The buoys are designed specifically to take temperatures; the ship data is taken by eyeballing a temp gauge INSIDE an engine intake. Engines inherently heat things up, and then you have the human error of eyeballing a temperature gauge. If anything, the ships’ data should be thrown out entirely or adjusted down to the buoys, not the other way around. Not to mention the fact that they didn’t really start taking ocean temps until directly after the Little Ice Age. Of course ocean temps would heat up coming out of such a cold period in the earth’s history.
DWR54, yes, ask yourself that question: “how could we distinguish a real deviation from ‘normal’ from a spurious one?”
How do you tell a sudden warm wind from a car or aircraft exhaust or an air conditioner without ACTUALLY OBSERVING THE CONDITIONS AT THE TIME OF THE DEVIATION?
NASA/NCDC/GISS/BEST all make guesses about what happened in the past and that also includes TOBS.
DWR, @ 3:51 pm Feb 7: I agree, as long as those 3-4 stations are within an acceptable distance, +/- 5 km in a “circular pattern”. Frankly I think that would be nearly impossible (I am an observer, not an expert either). In our case the stations nearest to ours are much further away and in hugely different “micro” climates. To me your solution would have to involve leaving the compromised station in place, installing 4 more sites around it some distance away in non-compromised (“rural”) locations and, over a period of time, taking readings and then correcting the results for the “tainted” central site (hey, more jobs!! – and again, I am not an expert).
This should not take long; the bias should show up relatively quickly – say, over a month we should have a pretty good idea what the anomalies are.
DWR54
The WMO sums it up best:
WMO – “The nature of urban environments makes it impossible to conform to the standard guidance for site selection and exposure of instrumentation required for establishing a homogeneous record that can be used to describe the larger-scale climate”
DWR54
“If a particular temperature data set contains a known bias, whether warming or cooling, do folks here agree in principle that it should be corrected for?”
It’s very easy to salami-slice the temps up each year to get whatever temp you require with this method.
If many stations have such a “known” bias, it would be better to revert to satellite measurements and to further develop the instruments there. Satellites measure everything over a given area, so nothing needs to be homogenized, interpolated or extrapolated. Perhaps the measurement has to be broken down by atmospheric level, but the problem of UHI and other affected sites is eliminated. I would therefore advocate making this type of temperature measurement better by awarding research funds, so that it can finally replace the surface measurements.
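The point about area-based measurement can be illustrated. On a regular latitude-longitude grid, the only “adjustment” a global mean needs is area weighting, because grid cells shrink toward the poles. A minimal sketch with a toy 3x2 grid of invented values (not any real satellite product’s algorithm):

```python
import math

# Sketch: area-weighted mean over a regular lat-lon grid. Rows are
# weighted by cos(latitude) because cells shrink toward the poles.
# The grid values are invented, for illustration only.

def area_weighted_mean(grid, lats):
    """grid[i] holds the values along latitude lats[i] (degrees)."""
    num = 0.0
    den = 0.0
    for row, lat in zip(grid, lats):
        w = math.cos(math.radians(lat))
        num += w * sum(row)
        den += w * len(row)
    return num / den

lats = [-60.0, 0.0, 60.0]                  # row-centre latitudes
grid = [[10.0, 10.0],                      # cool high latitudes
        [30.0, 30.0],                      # warm tropics
        [10.0, 10.0]]
print(round(area_weighted_mean(grid, lats), 3))  # -> 20.0
```

An unweighted mean of the same six cells gives ~16.7, over-counting the small polar cells; the cosine weighting corrects that without any station-style homogenization.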
1) How do you demonstrate that a dataset has a bias?
2) How do you calculate the exact level of this bias?
3) How do you adjust for this bias?
4) How do you adjust the error bars for this “bias” adjusting?
What is known to warmists is rarely actual.
Very interesting update from Mars Rover team: Climatologists cannot explain how liquid water could have possibly been in obvious abundance on Mars’ surface:
http://mars.jpl.nasa.gov/news/2017/nasas-curiosity-rover-sharpens-paradox-of-ancient-mars&s=2
Wow, the bias is strong. Despite clear evidence of a lake, they conclude Mars couldn’t have been warm enough for liquid water because there wasn’t enough carbon dioxide to drive their model’s temperature up.
Did every climate scientist invest his or her pension in carbon credit futures?
One of the wonders of twitter is that you can reach out and touch someone … @KellyAnnePolls
Kellyanne Conway
She pays attention to this stuff bc that’s her job … she v data intensive in her message making for the WH
Redundancy matters
Reach out and tell her
Also @parscale
Brad Parscale … sentiment data number cruncher
Public Service Announcement over
We need an open source temperature reconstruction. Way too much power is concentrated in the hands of activists masquerading as scientists. They start with a conclusion and work backward. These are the last people we should have in charge of anything that can impact public policy. They simply lack any credibility at all.
Climate “Science” on Trial; The Consensus is more Con and NonSense than Science
https://co2islife.wordpress.com/2017/01/29/climate-science-on-trial-the-consensus-is-more-con-and-nonsense-than-science/
Climate Bullies Gone Wild; Caught on Tape and Print
https://co2islife.wordpress.com/2017/01/22/climate-bullies-gone-wild-caught-on-tape-and-print/
“We need an open source temperature reconstruction.”
That’s exactly what BEST (now Berkeley Earth – BE) was set up to do and was endorsed to do by this very site. It went ahead and provided this reconstruction – all of its methods and results are open to public scrutiny. The BE reconstruction turned out to be in agreement with every other group that has ever looked into this issue.
You say we need ‘an open source temperature reconstruction’. Then we get one, and you don’t like the results. So you again say “we need an open source temperature reconstruction”.
Let’s face it, you won’t be satisfied with any temperature reconstruction that disagrees with your beliefs.
Berkeley is the last place I would want this project to be run out of. Also, the BEST project uses existing data. I’m talking about reconstructing from the floor up. What good is Best if they use corrupted data sources?
co2islife
“Berkeley is the last place I would want this project to be run out of.”
How times change.
“What good is Best if they use corrupted data sources?”
They use raw data sources and publish these freely.
My belief is that we have had some warming coming out of the Little Ice Age, but that we have cooled since the Medieval Warming Period, the Roman Warming Period, the Minoan Warming Period, the Holocene Optimum, and most of the Eemian.
Small adjustments in the modern record don’t sway me. It is provable that nothing happening since 1978 – or even since the 1930s – is remotely close to unprecedented.
DWR54, I suggest that you take a look at BEST’s “Output Final Temperature”.
Just try comparing your local weather station temperature dataset from your country’s Met Office with BEST’s data.
Usually the raw data that BEST uses is the same as the data output of your local Met Office (of course your local Met Office has already done its own “Quality Control” on the data); then compare the adjustments and “Final Output” with your original.
I have done this for many sites, along with many other posters; what they are doing is not science and the Final Output is not REAL.
I suggest you also look at Valentia, a long-term, top-class dataset, and then tell me that the Irish Met Office knows less about its own temps than BEST.
Or perhaps the Iceland temp sites; again, the same thing.
Or perhaps you would like it from the horse’s mouth.
Steven Mosher | July 2, 2014 at 11:59 am |
“However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”
BEST does no ADJUSTMENT to the data.
All the data is used to create an ESTIMATE, a PREDICTION
“At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally to be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.”
With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.
See the scare quotes?
The approach is fundamentally different from adjusting series and then calculating an average of adjusted series.
Instead we use all the raw data, and then we build a model to predict
the temperature.
At the local level this PREDICTION will deviate from the local raw values.
It has to.
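The “predict, don’t adjust” distinction Mosher describes can be illustrated with a toy example: fit one regional signal to pooled raw data from several stations, then compare the regional prediction at one site to that site’s raw values. Everything here (stations, offsets, trend, noise) is invented for illustration; this is not BEST’s actual kriging method:

```python
import numpy as np

# Toy sketch of "predict, don't adjust": fit one regional trend to pooled
# raw data from several stations (each with its own offset), then compare
# the regional prediction at one site to that site's raw values.
# All numbers are invented; this is NOT BEST's actual method.
rng = np.random.default_rng(1)
years = np.arange(1950, 2000)
true_trend = 0.01  # degC/yr, invented regional signal
offsets = [0.0, 1.5, -0.8]  # invented per-station offsets (e.g. elevation)
stations = [true_trend * (years - 1950) + off + rng.normal(0, 0.3, years.size)
            for off in offsets]

# Remove each station's mean (its local offset), pool, and fit one slope.
anoms = np.concatenate([s - s.mean() for s in stations])
t = np.tile(years - years.mean(), len(stations))
fit_trend = np.polyfit(t, anoms, 1)[0]

# The regional prediction necessarily deviates from any one site's raw data.
pred = fit_trend * (years - years.mean())
resid = (stations[0] - stations[0].mean()) - pred
print(round(fit_trend, 3))                     # close to the true 0.01 degC/yr
print(round(float(np.abs(resid).mean()), 2))   # nonzero local deviation
```

The point of the sketch: the pooled fit recovers the regional signal well, yet its prediction at any single station differs from that station’s raw readings, exactly as the comment says it must.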
It really is pathetic how you keep beating that dead horse.
It doesn’t matter how many problems we find with BEST; because the effort was endorsed beforehand, we are supposed to accept the results?
The fact that the lead author was caught in a flat out lie when he claimed to be a skeptic before starting this project should be all the evidence needed that something fishy was being done.
DWR54:
You assert
That is a misunderstanding. I object to political propaganda that pretends to be science.
All global and hemispheric temperature time series are complete bunkum: they are junk. This is explained in my post above and this link jumps to it.
For a more detailed explanation of the scandal that is global temperature time series, read this, especially its Appendix B. Please note that this link is to an item in Hansard (i.e. the official record of the UK Parliament) and is a submission I made to a Parliamentary Select Committee. If it were untrue, it would be perjury that would have put me in jail.
Richard
DWR54 :
The link to Hansard did not work. This is probably because of medical problems that give me difficulty posting items. Sorry.
This is the URL that links to the item in Hansard.
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard
““We need an open source temperature reconstruction.”
That’s exactly what BEST (now Berkeley Earth – BE) was set up to do and was endorsed to do by this very site. It went ahead and provided this reconstruction – all of its methods and results are open to public scrutiny. The BE reconstruction turned out to be in agreement with every other group that has ever looked into this issue.”
Yeah, Zeke Hausfather, and Berkeley Earth say they confirm Karl’s “Pausebuster” paper. So if the pausebuster paper is wrong, what does that make BEST?
And also a possible explanation for why the surface now runs hotter than the troposphere, which is opposite to theoretical predictions. O, what a tangled web we weave when first we practise to deceive!
According to the latest TTT data (v4.0) from RSS there has been no statistically significant difference between the rates of warming observed at the surface and in the atmosphere.
Ha ha ha…..
It is really hard to imagine why the TTT data has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three, but for some reason it is way higher.
Bill Illis
“It is really hard to imagine why the TTT data, has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three but for some reason it is way higher.”
Bill, you’re overlooking the fact that the current RSS TTT data is based on the revised and peer-reviewed version 4. The current TLT data is based on version 3, which RSS chief scientist Carl Mears says contains a known cooling bias (from the stratosphere).
There is no significant warming since the 1930s.
CO2 has risen since the 1930s, but temperatures are basically flat.
CO2 is ergo not the driver of temperatures.
Bill Illis February 7, 2017 at 4:23 pm
It is really hard to imagine why the TTT data, has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three but for some reason it is way higher.
TTT = 1.1*TMT – 0.1*TLS
About 10% of the TMT signal comes from the lower stratosphere, which is removed by subtracting the suitably weighted lower-stratosphere TLS. Since TLS is decreasing at ~0.26 K/decade, TTT is higher than TMT.
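The arithmetic is easy to check. A minimal sketch, where the TMT trend value is a placeholder assumption; only the weighting formula and the -0.26 K/decade TLS figure come from the comment above:

```python
# Checking the TTT weighting: TTT = 1.1*TMT - 0.1*TLS.
# A cooling stratosphere (negative TLS trend) therefore ADDS to the TTT
# trend relative to TMT. The TMT trend below is a placeholder assumption;
# only the -0.26 K/decade TLS figure is from the comment above.
def ttt_trend(tmt_trend, tls_trend):
    """Combine TMT and TLS trend values with the TTT weights."""
    return 1.1 * tmt_trend - 0.1 * tls_trend

tmt = 0.08   # K/decade, placeholder value
tls = -0.26  # K/decade, cited stratospheric cooling
print(round(ttt_trend(tmt, tls), 3))  # 0.114, i.e. larger than TMT's 0.08
```

This shows why subtracting a cooling stratosphere raises the TTT trend above TMT, whatever the exact TMT value.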
JRA-55 says RSS TLT is fine
There has been no cooling of the LS since 1995.
http://onlinelibrary.wiley.com/doi/10.1002/2015JD024039/full
Not talking about sat data. The topic is about land surface measurement data. People confuse easily, don’t they…
The future is known. Only the past is uncertain.
At 1:21, RWTurner asks about ice coverage in the Antarctic. It was at 2 standard deviations above normal; then the data had a problem and the plots became erratic, then shut down. On restart, the plot went to 2 standard deviations below. The swing seems beyond what El Niño could force, which has been bugging me.
Any comments?
I have been saying this on many forums: how could both Poles suddenly change? Everyone knows they are opposed in ice growth and loss.
A C Osborn on February 8, 2017 at 7:05 am
…everyone knows they are opposed in Ice growth and loss.
You would certainly be right if you considered the satellite sea ice extent readings over the whole period.
But the actual situation differs a bit:
http://fs5.directupload.net/images/170209/pd3y4ua8.jpg
Sources:
– ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/north/daily/data/
– ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/south/daily/data/
Pretty good visible here too:
https://s3-us-west-1.amazonaws.com/www.moyhu.org/blog/polview.html
I have used two graphs from NOAA for several years now in my talks on climate. Unfortunately, I don’t know how to post them in the comments section. The first one displays the Michigan annual temps that I saved in 2012. The second I saved in 2014. The warming went from almost zero to 0.2 degrees per decade, or 2 degrees per century. Talk about data tampering!
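For anyone wanting to check such a figure themselves, a per-decade trend is just the slope of an ordinary least-squares line fit to the annual values. A minimal sketch with synthetic data (the numbers are invented, not the actual Michigan record):

```python
import numpy as np

# Sketch: a per-decade warming trend is the slope of an ordinary
# least-squares line fit to annual temperatures. Synthetic data only;
# these are not the actual Michigan values.
years = np.arange(1900, 2015)
rng = np.random.default_rng(0)
temps = 0.02 * (years - 1900) + rng.normal(0, 0.5, years.size)

slope_per_year = np.polyfit(years, temps, 1)[0]
print(round(slope_per_year * 10, 2), "degC/decade")  # ~0.2 by construction
```

The same fit applied to two vintages of the same state record is what reveals a before/after trend change.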
That’s the whole “beauty” of agenda-driven adjustments made in GHCN3.
Another exposé of corrupt temperature documentation has been published.
That adds to a growing list made by others:
Tony Heller
Pierre Gosselin
Paul Homewood
Anyone else to add to this list?
Good job,guys!
Sunshine Hours
Jo Nova
Donna Laframboise
Jennifer Marohasy
Ristvan
Jonathan Lowe (http://gustofhotair.blogspot.co.uk/)
Tom Nelson
kenskingdom (https://kenskingdom.wordpress.com/2014/07/16/the-australian-temperature-record-revisited-part-4-outliers/)
And there are probably many more that I can’t remember.
What about the twice-daily balloons? What do they show in the atmosphere?
Have those balloons found any warming up there?
Yes. Faster than the surface temperature data:
Tamino?
You went to the ultimate source!
Faster than 1910 to 1940? Faster than the exit from the last glacial?
Boy, DWR, I sure fell for your “I am not an expert” comment at 3:51 pm Feb 7; since then you have shown quite a bit of “non-expertise”. My bad.
There is no global data for balloons. They only cover land for the most part and only over a small area of the planet. However, they can be used to verify satellite readings and that work has already been done. What those comparisons show is the satellites are quite accurate.
Richard M, thank you for the information. I remember that back in the 1960s some of the meteorologists out of Portland, OR, would include the free-air freezing point from Salem, OR.
Larry on February 7, 2017 at 4:46 pm
There have been over 1,500 balloons working. They belong to the Integrated Global Radiosonde Archive (IGRA) network.
How many of them are still active I don’t know.
A small but very representative IGRA subset, called RATPAC (existing in versions A and B), consists of 85 of them.
Here is a comparison of GISS, RATPAC B and UAH 6.0:
http://fs5.directupload.net/images/170209/ttgt5ivu.jpg
For the RATPAC plot I chose the atmospheric pressure level of 700 hPa: it corresponds to an altitude of about 3 km, near the height (3.7 km) where in theory (!) the UAH satellites should perform their readings, according to the average 264 K they measured in 2015.
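The 700 hPa ≈ 3 km correspondence can be sanity-checked with the hypsometric equation. A minimal sketch, assuming a mean layer temperature of about 270 K and standard sea-level pressure (both assumed values, not figures from this thread):

```python
import math

# Sanity check: altitude of the 700 hPa level via the hypsometric equation,
# assuming a mean layer temperature of ~270 K (an assumed value) and
# standard sea-level pressure.
def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25, t_mean_k=270.0):
    R = 287.05   # J/(kg*K), specific gas constant for dry air
    g = 9.80665  # m/s^2, standard gravity
    return (R * t_mean_k / g) * math.log(p0_hpa / p_hpa)

print(round(pressure_to_altitude_m(700) / 1000, 1), "km")  # ~2.9 km
```

The exact answer shifts a little with the assumed mean temperature, but it lands near the 3 km mentioned above.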
Larry on February 7, 2017 at 4:46 pm [2]
But now, if you repeat the same exercise with another IGRA subset of 31 “US controlled” {sic} balloons, selected by Christy and Norris, you obtain this graph:
http://fs5.directupload.net/images/170209/s9n8h6jz.jpg
As you can see, the balloon trend moved from near GISS down to below UAH6.0 (but the pressure still is at 700 hPa).
28 of the 31 balloons operate in CONUS+AK. If you restrict the plot to these 28, the balloon trend moves above UAH’s.
So apparently not all balloon radiosondes are alike.
I found the subset in the paper:
Satellite and VIZ–Radiosonde Intercomparisons for Diagnosis of Nonclimatic Influences
JOHN R. CHRISTY AND WILLIAM B. NORRIS (2006)
No wonder that Richard M writes below:
What those comparisons show is the satellites are quite accurate.
My favorite astronomer appears to be in denial.
http://www.blastr.com/badastronomy/2017-2-6/sorry-climate-change-deniers-global-warming-pause-still-never-happened
Dr. Bates doesn’t need to make specific charges that will land him in a lawsuit dragging out for years, like Mark Steyn and Michael Mann. His original post and follow-up on Dr. Curry’s blog are very clear. Karl et al. did not handle their data per NOAA practices, and it is not falsifiable or reproducible because of archiving and software issues. Karl Popper is spinning in his grave.
I would bet that most people do not know that AGW is mainly based on computer modeling which changes inputs frequently. They would compare it to political polling data.
It is now on Fox anyway
http://www.foxnews.com/science/2017/02/07/federal-scientist-cooked-climate-change-books-ahead-obama-presentation-whistle-blower-charges.html
michael
Yes! Top of main page, with 4 separate links to related stories!
This topic is getting ‘some legs’!
National Review is covering it as well
http://www.nationalreview.com/article/444668/whistle-blower-scientist-exposes-shoddy-climate-science-noaa