Guest Post by Professor Robert Brown from Duke University and Werner Brozek, Edited by Just The Facts:

The above graphic shows RSS having a slope of zero from both January 1997 and March 2000. As well, GISS shows a positive slope of 0.012/year from both January 1997 and March 2000. This should put to rest the notion that the strong El Nino of 1998 had any lasting effect on anything. Why is there such a difference between GISS and RSS? That question will be explored further below.
The previous post had many gems in the comments. I would like to thank firetoice2014 for their comment that inspired the title of this article.
I would also like to thank sergeiMK for very good comments and questions here. Part of their comment is excerpted below:
“@rgb
So you are basically stating that all major providers of temperature series of either
1 being incompetent
2 purposefully changing the data to match their belief.”
Finally, I would like to thank Professor Brown for his response. With some changes and deletions, it is reproduced below and ends with rgb.
“rgbatduke
August 14, 2015 at 12:06 pm
Note well that all corrections used by USHCN boil down to (apparently biased) thermometric errors, errors that can be compared to the recently discovered failure to correctly correct for thermal coupling between the actual measuring apparatus in intake valves in ocean vessels and the incoming seawater that just happened to raise global temperatures enough to eliminate the unsightly and embarrassing global anomaly “Pause” in the latest round of corrections to the major global anomalies; they are errors introduced by changing the kind of thermometric sensors used, errors introduced by moving observation sites around, errors introduced by changes in the time of day observations are made, and so on. In general one would expect measurement errors in any
given thermometric time series, especially when they are from highly diverse causes, to be as likely to cool the past relative to the present as warm it, but somehow, that never happens. Indeed, one would usually expect them to be random, unbiased over all causes, and hence best ignored in statistical analysis of the time series.
Note well that the total correction is huge. The range is almost the entire warming reported in the form of an anomaly from 1850 to the present.
Would we expect the sum of all corrections to any good-faith dataset (not just the thermometric record, but say, the Dow Jones Industrial Average “DJIA”) to be correlated, with, say, the height of my grandson (who is growing fast at age 3)? No, because there is no reasonable causal connection between my grandson’s height and an error in thermometry. However, correlation is not causality, so both of them could be correlated with time. My grandson has a monotonic growth over time. So does (on average, over a long enough time) the Dow Jones Industrial Average. So does carbon dioxide. So does the temperature anomaly. So does (obviously) the USHCN correction to the temperature anomaly. We would then observe a similar correlation between carbon dioxide in the atmosphere and my grandson’s height that wouldn’t necessarily mean that increasing CO2 causes growth of children. We would observe a correlation between CO2 in the atmosphere and the DJIA that very likely would be at least partly causal in nature, as CO2 production produces energy as a side effect and energy produces economic prosperity and economic prosperity causes, among other things, a rise in the DJIA.
In Nassim Nicholas Taleb’s book The Black Swan, he describes the analysis of an unlikely set of coin flips by a naive statistician and Joe the Cab Driver. A coin is flipped some large number of times, and it always comes up heads. The statistician starts with a strong Bayesian prior that a coin, flipped, should produce heads and tails roughly equal numbers of times. When in a game of chance played with a friendly stranger he flips the coin (say) ten times and it turns up heads every time (so that he loses) he says “Gee, the odds of that were only one in a thousand (or so). How unusual!” and continues to bet on tails as if the coin is an unbiased coin, because sooner or later the law of averages will kick in, tails will occur as often as heads or more, and things will balance out.
Joe the Cab Driver stopped at the fifth or sixth head. His analysis: “It’s a mug’s game. This joker slipped in a two headed coin, or a coin that is weighted to nearly always land heads”. He stops betting, looks very carefully at the coin in question, and takes “measures” to recover his money if he was betting tails all along. Or perhaps (if the game has many players) he quietly starts to bet on heads to take money from the rest of the suckers, including the naive statistician.
An alternative would be to do what any business would do when faced with an apparent linear correlation between the increasing monthly balance in the company president’s personal account and unexplained increasing shortfalls in total revenue. Sure, the latter have many possible causes — shoplifting, accounting errors, the fact that they changed accountants back in 1990 and changed accounting software back in 2005, theft on the manufacturing floor, inventory errors — but many of those changes (e.g. accounting or inventory) should be widely scattered and random, and while others might increase in time, an increase in time that matches the increase in time in the president’s personal account when the president’s actual salary plus bonuses went up and down according to how good a year the company had and so on seems unlikely.
So what do you do when you see this, and can no longer trust even the accountants and accounting that failed to observe the correlation? You bring in an outside auditor, one that is employed to be professionally skeptical of this amazing coincidence. They then check the books with a fine-toothed comb and determine if there is evidence sufficient to fire and prosecute (smoking gun of provable embezzlement), fire only (probably embezzled, but can’t prove it beyond all doubt in a court of law), continue observing (probably embezzled, but there is enough doubt to give him the benefit of the doubt — for now), or exonerate him completely, all income can be accounted for and is disconnected from the shortfalls which really were coincidentally correlated with the president’s total net worth.
Until this is done, I have to side with Joe the Cab Driver. Up until the latest SST correction I was managing to convince myself of the general good faith of the keepers of the major anomalies. This correction, right before the November meeting, right when The Pause was becoming a major political embarrassment, was the straw that broke the p-value’s back. I no longer consider it remotely possible to accept the null hypothesis that the climate record has not been tampered with to increase the warming of the present and cooling of the past and thereby exaggerate warming into a deliberate better fit with the theory instead of letting the data speak for itself and hence be of some use to check the theory.
The bias doesn’t even have to be deliberate in the sense of people going “Mwahahahaha, I’m going to fool the world with this deliberate misrepresentation of the data”. Sadly, there is overwhelming evidence that confirmation bias doesn’t require anything like deliberate dishonesty. All it requires is a failure in applying double blind, placebo controlled reasoning in measurements. Ask any physician or medical researcher. It is almost impossible for the human mind not to select data in ways that confirm our biases if we don’t actively defeat it. It is as difficult as it is for humans to write down a random number sequence that is at all like an actual random number sequence (go on, try it, you’ll fail). There are a thousand small ways to make it so. Simply considering ten adjustments, trying out all of them on small subsets of the data, and consistently rejecting corrections that produce a change “with the wrong sign” compared to what you expect is enough. You can justify all six of the corrections you kept, but you couldn’t really justify not keeping the ones you reject. That will do it. In fact, if you truly believe that past temperatures are cooler than present ones, you will only look for hypotheses to test that lead to past cooling and won’t even try to think of those that might produce past warming (relative to the present).
Why was NCDC even looking at ocean intake temperatures? Because the global temperature wasn’t doing what it was supposed to do! Why did Cowtan and Way look at arctic anomalies? Because temperatures there weren’t doing what they were supposed to be doing! Is anyone looking into the possibility that phenomena like “The Blob” that are raising SSTs and hence global temperatures, and that apparently have occurred before in past times, might make estimates of the temperature back in the 19th century too cold compared to the present, as the existence of a hot spot covering much of the pacific would be almost impossible to infer from measurements made at the time? No, because that correction would have the wrong sign.
So even a discussion like the excellent one on Curry’s blog, where each individual change made by USHCN can be justified in some way or another, and which pointed out — correctly, I believe — that the adjustments were made in a kind of good faith, is not sufficient evidence that they were made without bias towards a specific conclusion, a bias that might end up with correction error greater than the total error that would be made with no correction at all. One of the whole points about error analysis is that one expects a priori error from all sources to be random, not biased. One source of error might not be random, but another source of error might not be random as well, in the opposite direction. All it takes to introduce bias is to correct for all of the errors that are systematic in one direction, and not even notice sources of error that might work the other way. It is why correcting data before applying statistics to it, especially data correction by people who expect the data to point to some conclusion, is a place that angels rightfully fear to tread. Humans are greedy pattern matching engines, and it only takes one discovery of a four leaf clover correlated with winning the lottery to overwhelm all of the billions of four leaf clovers that exist but somehow don’t affect lottery odds in the minds of many individuals. We see fluffy sheep in the clouds, and Jesus on a burned piece of toast.
But they aren’t really there.
rgb”
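rgb's point about trying many candidate corrections and keeping only those with the "expected" sign is easy to illustrate with a small simulation. The numbers below are made up, and the pool of candidate corrections is unbiased by construction; selecting only the ones that cool the past still cools the past:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ten candidate adjustments whose individual justifications all look plausible and
# whose true net effect is zero on average (i.e. an unbiased pool of corrections).
candidates = rng.normal(loc=0.0, scale=0.05, size=10)

net_all = candidates.sum()                       # apply every candidate
net_kept = candidates[candidates < 0].sum()      # keep only the ones that cool the past

print(f"Net shift, all candidates applied:        {net_all:+.3f} C")
print(f"Net shift, only 'past-cooling' ones kept: {net_kept:+.3f} C")
# The second number is systematically negative: the past gets cooled, and the
# apparent warming trend grows, even though every kept correction looked defensible.
```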
In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2015 so far compares with 2014 and the warmest years and months on record so far. For three of the data sets, 2014 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative on at least one calculation. So if the slope from September is 4 x 10^-4 but it is – 4 x 10^-4 from October, we give the time from October so no one can accuse us of being less than honest if we say the slope is flat from a certain month. (A sketch of this calculation follows the list below.)
1. For GISS, the slope is not flat for any period that is worth mentioning.
2. For Hadcrut4, the slope is not flat for any period that is worth mentioning.
3. For Hadsst3, the slope is not flat for any period that is worth mentioning.
4. For UAH, the slope is flat since April 1997 or 18 years and 4 months. (goes to July using version 6.0)
5. For RSS, the slope is flat since January 1997 or 18 years and 7 months. (goes to July)
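For anyone who wants to check these numbers independently of WFT, here is a minimal sketch of the slope search described above; the synthetic series at the bottom is only a stand-in, not real data:

```python
import numpy as np

def flat_since(anoms):
    """Return the index of the furthest-back month from which the least-squares
    slope of the series (that month to the end) is zero or negative, or None."""
    anoms = np.asarray(anoms, dtype=float)
    for start in range(len(anoms) - 24):          # ignore windows shorter than ~2 years
        y = anoms[start:]
        x = np.arange(len(y))
        slope = np.polyfit(x, y, 1)[0]            # degrees per month
        if slope <= 0.0:
            return start                          # earliest start with a non-positive slope
    return None

# Example with synthetic data: 20 years of warming followed by ~18 flat, noisy years.
rng = np.random.default_rng(0)
series = np.concatenate([np.linspace(-0.3, 0.3, 240),
                         0.3 + rng.normal(0.0, 0.1, 18 * 12)])
start = flat_since(series)
if start is not None:
    print(f"Slope is non-positive from {(len(series) - start) / 12:.1f} years before the end")
```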
The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.

When two things are plotted as I have done, the left-hand scale only applies to the temperature anomalies.
The actual numbers are meaningless since the two slopes are essentially zero. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However, WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two sets.
Section 2
For this analysis, data were retrieved from Nick Stokes’ Trendviewer, available on his website: http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.
On several different data sets, there has been no statistically significant warming for between 11 and 22 years according to Nick’s criteria. Cl stands for the confidence limits at the 95% level.
The details for several sets are below, followed by a sketch of how such a significance test can be reproduced.
For UAH6.0: Since November 1992: Cl from -0.007 to 1.723
This is 22 years and 9 months.
For RSS: Since February 1993: Cl from -0.023 to 1.630
This is 22 years and 6 months.
For Hadcrut4.4: Since November 2000: Cl from -0.008 to 1.360
This is 14 years and 9 months.
For Hadsst3: Since September 1995: Cl from -0.006 to 1.842
This is 19 years and 11 months.
For GISS: Since August 2004: Cl from -0.118 to 1.966
This is exactly 11 years.
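For readers who want to reproduce this kind of test outside Nick's viewer, here is a minimal stand-alone sketch: ordinary least squares on a monthly series with a naive 95% interval on the slope. It does not include whatever autocorrelation handling Nick's tool applies, so the intervals here will generally be too narrow:

```python
import numpy as np
from scipy import stats

def trend_with_ci(anoms, months_per_unit=1200):
    """OLS trend of a monthly series with a naive 95% confidence interval.
    months_per_unit=1200 expresses the result in degrees per century."""
    y = np.asarray(anoms, dtype=float)
    x = np.arange(len(y)) / months_per_unit
    res = stats.linregress(x, y)
    half_width = 1.96 * res.stderr        # assumes independent residuals
    return res.slope, res.slope - half_width, res.slope + half_width

# Warming is "not statistically significant" from a given start month if the
# lower bound of the interval is at or below zero.
rng = np.random.default_rng(1)
fake = 0.005 * np.arange(240) / 12 + rng.normal(0, 0.1, 240)   # weak trend plus noise
slope, lo, hi = trend_with_ci(fake)
print(f"trend = {slope:.2f} C/century, 95% CI [{lo:.2f}, {hi:.2f}]")
```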
Section 3
This section shows data about 2015 and other information in the form of a table. The five data sources are shown along the top and are repeated at intervals within the table so they remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.
Down the column, are the following:
1. 14ra: This is the final ranking for 2014 on each data set.
2. 14a: Here I give the average anomaly for 2014.
3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2014 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.
10. Jan: This is the January 2015 anomaly for that particular data set.
11. Feb: This is the February 2015 anomaly for that particular data set, etc.
17. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.
18. rnk: This is the rank that each particular data set would have for 2015 without regard to error bars and assuming no changes. Think of it as an update 35 minutes into a game. (A small sketch of rows 17 and 18 follows the table.)
| Source | UAH | RSS | Had4 | Sst3 | GISS |
|---|---|---|---|---|---|
| 1.14ra | 6th | 6th | 1st | 1st | 1st |
| 2.14a | 0.170 | 0.255 | 0.564 | 0.479 | 0.74 |
| 3.year | 1998 | 1998 | 2014 | 2014 | 2014 |
| 4.ano | 0.482 | 0.55 | 0.564 | 0.479 | 0.74 |
| 5.mon | Apr98 | Apr98 | Jan07 | Aug14 | Jan07 |
| 6.ano | 0.742 | 0.857 | 0.832 | 0.644 | 0.96 |
| 7.y/m | 18/4 | 18/7 | 0 | 0 | 0 |
| 8.sig | Nov92 | Feb93 | Nov00 | Sep95 | Aug04 |
| 9.sy/m | 22/9 | 22/6 | 14/9 | 19/11 | 11/0 |
| Source | UAH | RSS | Had4 | Sst3 | GISS |
| 10.Jan | 0.277 | 0.367 | 0.688 | 0.440 | 0.81 |
| 11.Feb | 0.175 | 0.327 | 0.660 | 0.406 | 0.87 |
| 12.Mar | 0.165 | 0.255 | 0.681 | 0.424 | 0.90 |
| 13.Apr | 0.087 | 0.174 | 0.656 | 0.557 | 0.73 |
| 14.May | 0.285 | 0.309 | 0.696 | 0.593 | 0.77 |
| 15.Jun | 0.333 | 0.391 | 0.728 | 0.575 | 0.79 |
| 16.Jul | 0.183 | 0.289 | 0.691 | 0.636 | 0.75 |
| Source | UAH | RSS | Had4 | Sst3 | GISS |
| 17.ave | 0.215 | 0.302 | 0.686 | 0.519 | 0.80 |
| 18.rnk | 3rd | 6th | 1st | 1st | 1st |
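Rows 17 and 18 are straightforward to reproduce; here is a small sketch using the RSS column of the table (the provisional rank additionally needs the completed annual averages, which are not listed in this post):

```python
# Row 17: year-to-date mean of the monthly anomalies (RSS column as the example).
rss_2015 = [0.367, 0.327, 0.255, 0.174, 0.309, 0.391, 0.289]   # Jan-Jul from the table
ytd_mean = sum(rss_2015) / len(rss_2015)
print(round(ytd_mean, 3))   # 0.302, matching row 17 for RSS

# Row 18: the provisional rank is simply where that mean would fall among the
# completed annual averages for the same data set; those annual values are not
# listed above, so they would have to be supplied here.
```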
If you wish to verify all of the latest anomalies, go to the following:
For UAH, version 6.0beta3 was used. Note that WFT uses version 5.6. So to verify the length of the pause on version 6.0, you need to use Nick’s program.
http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta3.txt
For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt
For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat
For GISS, see:
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
To see all points since January 2015 in the form of a graph, see the WFT graph below. Note that UAH version 5.6 is shown. WFT does not show version 6.0 yet. Also note that Hadcrut4.3 is shown and not Hadcrut4.4, which is why the last few months are missing for Hadcrut.

As you can see, all lines have been offset so they all start at the same place in January 2015. This makes it easy to compare January 2015 with the latest anomaly.
Appendix
In this part, we are summarizing data for each set separately.
RSS
The slope is flat since January 1997 or 18 years, 7 months. (goes to July)
For RSS: There is no statistically significant warming since February 1993: Cl from -0.023 to 1.630.
The RSS average anomaly so far for 2015 is 0.302. This would rank it as 6th place. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2014 was 0.255 and it was ranked 6th.
UAH6.0beta3
The slope is flat since April 1997 or 18 years and 4 months. (goes to July using version 6.0beta3)
For UAH: There is no statistically significant warming since November 1992: Cl from -0.007 to 1.723. (This is using version 6.0 according to Nick’s program.)
The UAH average anomaly so far for 2015 is 0.215. This would rank it as 3rd place, but just barely. 1998 was the warmest at 0.483. The highest ever monthly anomaly was in April of 1998 when it reached 0.742. The anomaly in 2014 was 0.170 and it was ranked 6th.
Hadcrut4.4
The slope is not flat for any period that is worth mentioning.
For Hadcrut4: There is no statistically significant warming since November 2000: Cl from -0.008 to 1.360.
The Hadcrut4 average anomaly so far for 2015 is 0.686. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.832. The anomaly in 2014 was 0.564 and this set a new record.
Hadsst3
For Hadsst3, the slope is not flat for any period that is worth mentioning. For Hadsst3: There is no statistically significant warming since September 1995: Cl from -0.006 to 1.842.
The Hadsst3 average anomaly so far for 2015 is 0.519. This would set a new record if it stayed this way. The highest ever monthly anomaly was in August of 2014 when it reached 0.644. The anomaly in 2014 was 0.479 and this set a new record.
GISS
The slope is not flat for any period that is worth mentioning.
For GISS: There is no statistically significant warming since August 2004: Cl from -0.118 to 1.966.
The GISS average anomaly so far for 2015 is 0.80. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.96. The anomaly in 2014 was 0.74 and it set a new record.
Conclusion
There might be compelling reasons why each new version of a data set shows more warming than cooling over the most recent 15 years. But after so many of these instances, who can blame us if we are skeptical?
Looking at the last graph in the original post, which compares trends since January 2015, what struck me was the complete divergence in the direction of the trends between different data sets. Throw out the SST line, since that measures something different from the rest, and even throw out the satellite data for the moment. The divergence between GISS and HadCrut tops out at about 0.1C. Doesn’t this mean that the uncertainty of the adjustment process has to be at least 0.1C, and that’s not even considering instrumentation errors or biases in the adjustment process?
If you assume that the satellite trends and the surface temperature trends should match up, which I think is a sound assumption, then the uncertainty jumps to a minimum of 0.2C. In other words, starting from the common base of January 2015, some scientists say “we’ve warmed 1.5C by our adjusted data” while others say we’ve cooled by 0.5C. This 2.0C discrepancy covers between about 25 and 40 percent of the anomalies in the years that the surface temps say achieved the record temperatures, making the “record temperature” claim worthless. This 2C discrepancy covers all the warming shown by RSS since it started and about 2/3 of all the warming shown by UAH.
You may be interested in this earlier post of mine where I pointed out a difference of almost 0.2 C. See:
http://wattsupwiththat.com/2013/12/22/hadcrut4-is-from-venus-giss-is-from-mars-now-includes-november-data/
(Snipped because sockpuppetry is frowned upon. -mod)
Well I can’t be held accountable for what you believe.
I had to start using a different WordPress account because my original account no longer works on this forum.
Others have commented in other forums about the mysterious censorship that happens on this forum, so I probably should not be surprised. Some find true skeptics confronting.
[it goes with your mysterious shape shifting multiple identities here -mod]
I responded here:
http://wattsupwiththat.com/2015/10/01/is-there-evidence-of-frantic-researchers-adjusting-unsuitable-data-now-includes-july-data/#comment-2039214
However Steven Mosher appears to have missed it, since he repeated that assertion about cooling at least 3 times since then. So I will repeat below exactly what the “two authors of this post” are talking about. And I challenge Mosher to find a new adjustment in which the last 16 years were cooler than before the adjustment.
In the following post:
http://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/
I wrote:
“From 1997 to 2012 is 16 years. Here are the changes in thousandths of a degree with the new version of HadCRUT4 being higher than the old version in all cases. So starting with 1997, the numbers are 2, 8, 3, 3, 4, 7, 7, 7, 5, 4, 5, 5, 5, 7, 8, and 15. The 0.015 was for 2012. What are the chances that the average anomaly goes up for 16 straight years by pure chance alone if a number of new sites are discovered? Assuming a 50% chance that the anomaly could go either way, the chances of 16 straight years of rises is 1 in 2^16 or 1 in 65,536. Of course this does not prove fraud, but considering that “HadCRUT4 was introduced in March 2012”, it just begs the question why it needed a major overhaul only a year later.”
And how do you suppose the last 16 years went prior to this latest revision? Here are the last 16 years counting back from 2013. The first number is the anomaly in Hadcrut4.2 and the second number is the anomaly in Hadcrut4.3: 2013(0.487, 0.492), 2012 (0.448, 0.467), 2011 (0.406, 0.421), 2010(0.547, 0.555), 2009 (0.494, 0.504), 2008 (0.388, 0.394), 2007(0.483, 0.493), 2006 (0.495, 0.505), 2005 (0.539, 0.543), 2004(0.445, 0.448), 2003 (0.503, 0.507), 2002 (0.492, 0.495), 2001(0.437, 0.439), 2000 (0.294, 0.294), 1999 (0.301, 0.307), and 1998(0.531, 0.535). Do you notice something odd? There is one tie in 2000. All the other 15 are larger. So in 32 different comparisons, there is not a single cooling. Unless I am mistaken, the odds of not a single cooling in 32 tries are 1 in 2^32, or about 1 in 4 x 10^9. I am not sure how the tie gets factored in, but however you look at it, incredible odds are broken in each revision. What did they learn in 2014 about the last 16 years that they did not know in 2013?
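The odds quoted above follow directly from treating each comparison as an independent 50/50 outcome (which is, of course, the very assumption being argued about). A quick check:

```python
# Odds of every comparison going the same way, assuming independent 50/50 outcomes.
for k in (16, 32):
    print(f"{k} comparisons all one way: 1 in {2**k:,} (about {2**-k:.1e})")
# 16 -> 1 in 65,536;  32 -> 1 in 4,294,967,296 (~4 x 10^9)
```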
“However Steven Mosher appears to have missed it, since he repeated that assertion about cooling at least 3 times since then. So I will repeat below exactly what the “two authors of this post” are talking about. And I challenge Mosher to find a new adjustment in which the last 16 years were cooler than before the adjustment.”
Simple: the GISS change in 2010 to the UHI adjustment. It cooled the trend.
Here is a clue.
NEVER ask a question in an interrogation when you don’t know the answer or when you can’t torture
the person you are interrogating.
Is my interrogation over? GISS 2010 changed the adjustment for UHI; trend result? The trend was nudged down… not up. Down… not up. Got that?
And now you will ask for an adjustment in the last 4 years…
or ask for a big adjustment that goes down, not up.
Nice game.
Thank you for that! But are we talking about 1 in 10 where the slope was lower? See the following where the combined adjustments are clearly giving a higher slope over the most recent years.
Sorry! The above is from:
http://wattsupwiththat.com/2015/07/24/impact-of-pause-buster-adjustment-on-giss-monthly-data/
“Simple the GISS change in 2010 to the UHI adjustment. Cooled the trend.”
Can you link to the data before and after the change?
The only data I can find on GISS shows each new version is warmer.
I have just received information from WD for GISS:
Slope January 2000 to January 2010 using data set ending at January 2010,
downloaded mid-February 2010 = 0.003046847166680871
Slope January 2000 to January 2010 using data set ending at January 2011,
downloaded mid-February 2011 = 0.003834128128554841
So the slope is higher using 2011 numbers than using 2010 numbers. Which years were picked to show a cooling?
The best points in this thread are about the predicted equatorial hot spots, of which there is no evidence. Their absence backs the good data and places the weight of rueful wantonness onto the fraudsters.
“Note well that the total correction is huge. The range is almost the entire warming reported in the form of an anomaly from 1850 to the present.”
=======================================
This really is not the case, no matter how you look at it. Here is the net effect of all adjustments to temperature data (both land and ocean):
http://www.yaleclimateconnections.org/wp-content/uploads/2015/06/0615_Fig_5_576.png
If you want to look only at land records (and ignore the 2/3rds of the earth where adjustments actually reduce global warming), the effect of adjustments is somewhere on the order of 20% of the century-scale trend, mostly concentrated in the period before 1950. During the period of satellite overlap the effect of land temperature adjustments is negligible globally:
http://1.bp.blogspot.com/-lwQfxPaXFd0/VNoo9h7vUhI/AAAAAAAAAhA/iW8rexGjbgU/s1600/land%2Braw%2Badj.png
Only in the U.S. do adjustments account for a large portion of the trend (~50%), and the U.S. is subject to some large systematic cooling biases due to TOBs changes and LiG to MMTS instrument transitions.
If you want to test adjustments, do what Williams et al did. Use synthetic data with various types of biases added (in both directions), and see how the algorithm performs. Bonus points if you design it as a blind study like they did: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
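The Williams et al. approach described here, injecting known biases into synthetic station data and seeing whether the adjustment algorithm recovers them, can be sketched in miniature. The "algorithm" below is only a placeholder breakpoint check against a neighbour average, not NOAA's pairwise homogenization code:

```python
import numpy as np

rng = np.random.default_rng(7)
n_months, n_stations = 480, 10
climate = np.cumsum(rng.normal(0, 0.03, n_months))          # shared regional signal
stations = climate + rng.normal(0, 0.2, (n_stations, n_months))

# Inject a known bias: station 0 drops by 0.5 C at month 250
# (a simulated instrument change with a known sign and size).
true_break, true_size = 250, -0.5
stations[0, true_break:] += true_size

# Placeholder "homogenization": difference station 0 against its neighbours,
# find the largest mean shift, and remove it.
diff = stations[0] - stations[1:].mean(axis=0)
shifts = [abs(diff[t:].mean() - diff[:t].mean()) for t in range(24, n_months - 24)]
est_break = int(np.argmax(shifts)) + 24
est_size = diff[est_break:].mean() - diff[:est_break].mean()
stations[0, est_break:] -= est_size

print(f"true break {true_break}, size {true_size:+.2f}")
print(f"estimated break {est_break}, size {est_size:+.2f}")
# A blind test repeats this with biases of BOTH signs, with the person running the
# algorithm not knowing where, or in which direction, the biases were injected.
```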
Voilà, the global warming pause has vanished! And here people have been debating the subject of a pause for the last 10 years…
Absolutely.
If one looks at the unadjusted data for the period 1910 to 1940 and compares it to the adjusted data for the same period, they have removed about 0.3 degC of warming.
Look at the rate of warming between about 1935 to 1940. In the unadjusted data, the rate is very steep, and in the adjusted data the rate of warming is significantly less.
All of this diminishes the bounds of natural variation and what temperature changes can be wrought by natural variation. It no doubt assists the warmists’ argument that natural variation cannot account for the warming post 1950 and that therefore that warming must be due to CO2.
Whether that is the unwitting outcome of the adjustments made, or whether the adjustments were deliberately made to that end, is a different matter, but it seems to me impossible to objectively argue that these adjustments do not have a significant impact on the rate of warming over the last century, and in particular on the rate of change in temperatures prior to 1950, which even the IPCC accepts was down to natural variation, not driven by CO2.
Isn’t cooling the past effectively the same as warming the present? If so, pointing to the timing of the adjustments as an argument against them being a problem would seem disingenuous.
This is an issue that I feel Steve Mosher (upstream) glosses over: [..]The net effect of all adjustments to all records is to COOL the record[…] and similar statements.
As usual these days he doesn’t elaborate. Surely the real measure, if you care about this kind of thing, is the trend. Cooling the past increases the trend regardless of whether a final value is ‘cooler’ or ‘hotter’. If for example we remove the 20’s/30’s/40’s ‘blip’ and keep constant the pre-90’s rise then we have a ‘significant’ recent trend.
Another ‘play’ might be to ‘adjust’ values focusing on your chosen baseline period eg 1951-1980 (GISS). Or simply choose a different ‘baseline’ (more difficult).
The SM assertion that we would end up warmer using ‘raw’ data tells me nothing as with many of his one line cryptic comments or his ‘sceptics believe’ monologues.
Also ‘timing’ is a huge issue.
That the ‘pause’ gets removed just in time for Paris is simply something I expected. I just had no idea who would do it or how they would do it. I did know that thousands of folk sitting around in some Paris meeting hall talking about a non-existent problem was never going to happen. One had to give, and my bet was on the ‘pause’.
And lo…
Exactly! And as was shown above, RSS has a slope of 0 from January 1997, but GISS has a slope of 0.012/year from that point. So the question that someone needs to answer is how much of the 0.012/year for GISS is real and how much is due to a huge number of adjustments?
The average of the daily rate of change in min and max temperature from the NCDC GSoD data set from 1940 to 2013 is 0.0+/-0.1F
Day to Day Temperature Difference
Surface data are from NCDC’s Global Summary of the Day data set: ~72 million daily readings
from all of the stations with >360 daily samples per year.
Data source:
ftp://ftp.ncdc.noaa.gov/pub/data/gsod/
Code:
http://sourceforge.net/projects/gsod-rpts/
y = -0.0001x + 0.001
R² = 0.0572
This is a chart of the annual average of the day-to-day change in surface station min temp.
(Tmin day-1) - (Tmin day-0) = Daily Min Temp Anomaly = MnDiff = Difference
For charts with MxDiff: (Tmax day-1) - (Tmax day-0) = Daily Max Temp Anomaly = MxDiff
MnDiff is also the same as
(Tmax day-1) - (Tmin day-1) = Rising
(Tmax day-1) - (Tmin day-0) = Falling
Rising - Falling = MnDiff
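For readers who want to reproduce the day-to-day differences defined above, here is a minimal pandas sketch; the column names and table layout are assumptions, not the GSoD file format:

```python
import pandas as pd

def daily_differences(df):
    """Day-to-day differences as defined above; df is assumed to have a daily
    DatetimeIndex and 'tmin' / 'tmax' columns (placeholder names, not GSoD's)."""
    out = pd.DataFrame(index=df.index)
    out["MnDiff"] = df["tmin"].shift(1) - df["tmin"]   # (Tmin day-1) - (Tmin day-0)
    out["MxDiff"] = df["tmax"].shift(1) - df["tmax"]   # (Tmax day-1) - (Tmax day-0)
    return out

# Annual average of the day-to-day change in Tmin, as charted in the comment above:
# daily_differences(df)["MnDiff"].groupby(df.index.year).mean()
```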
Brandon,
It is indeed. However, globally adjustments -warm- the past, they don’t cool it:
http://www.yaleclimateconnections.org/wp-content/uploads/2015/06/0615_Fig_5_576.png
Zeke Hausfather, um, okay? I guess if you have to keep pressing that message, you can, but that really doesn’t have anything to do with my question. I just asked about a simple relationship which goes to remarks like:
Because graphs like those could well be showing “the effect of adjustments” are being “mostly concentrated in the period before 1950” due to the choice of baseline. You aren’t actually showing where the effects of adjustments are when you show an anomaly chart unless you explicitly state you’ve left the baseline unchanged, something nobody’s ever stated. And given what I know of the methodologies being used, I’m relatively certain you can’t make that guarantee. That’s why I called it disingenuous.
If I replotted those series, shifting one up or down to make it appear the effect of adjustments actually took place in modern times, I know of no reason that would be any less valid than the plots you’ve provided. Do you know of one?
Baseline affects the appearance of the graph, but not the trend. Land adjustments have little impact on 1970-present trends globally, and a larger impact on 1900-present trends. That’s what I meant by most land adjustments cooling the past.
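The baseline point is easy to demonstrate with synthetic series: rebaselining shifts a curve up or down but leaves its least-squares trend untouched, while it does change where the visual gap between "raw" and "adjusted" appears. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(780)                      # ~1950-2014, monthly
raw = 0.0006 * months + rng.normal(0, 0.1, len(months))
adjusted = raw.copy()
adjusted[:240] -= 0.1                        # adjustments concentrated in the early years

def rebaseline(series, start, end):
    """Express a series as anomalies from its own mean over [start, end)."""
    return series - series[start:end].mean()

for base in [(0, 240), (360, 720)]:          # two different baseline periods
    r = rebaseline(raw, *base)
    a = rebaseline(adjusted, *base)
    trend_r = np.polyfit(months, r, 1)[0]
    trend_a = np.polyfit(months, a, 1)[0]
    print(f"baseline {base}: raw trend {trend_r:.2e}/month, adjusted trend {trend_a:.2e}/month,"
          f" end-of-series offset between the two curves {a[-1] - r[-1]:+.2f}")
# The trends are identical whichever baseline is used; only the vertical placement
# of the curves, and hence the visual impression of where adjustments act, changes.
```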
Zeke Hausfather:
Sure, but you’re not plotting a trend. You’re showing a graph whose appearance is affected by an arbitrary baseline, the choice of which helps create a visual impression which reinforces the point you want to make. That’s what I labeled disingenuous. You seem to be acknowledging my point.
I don’t dispute the point you now make, but that point is not the point you’re conveying in your graphs. The point you’re conveying in your graphs is a totally different point, a point which appears entirely dependent upon an arbitrary choice of baselines.
In other words, you’re making a bad argument by over-simplifying things with your responses to people. Anyone could rebut your graphs by simply replotting them with the baselines shifted, and you’d have no viable response because their versions would be as legitimate as yours. You would, of course, want to respond by talking about trends like you’ve done with me, but they’d just point out that’s not what you were talking about before. They’d point out you’re just changing the subject now that things have gotten inconvenient. And they’d be right.
If you want to talk about the effect of adjustments being “mostly concentrated in the period before 1950,” you cannot show people a graph where the baselines are forced to match in a period sometime after 1950. The series would be forced to align with one another even if there were meaningful adjustments post-1950 because the baselines are forced to match in that period. Your claim may be true, but the evidence you’re offering for it is completely disingenuous.
“As usual these days he doesn’t elaborate. Surely the real measure, if you care about this kind of thing, is the trend. Cooling the past increases the trend regardless of whether a final value is ‘cooler’ or ‘hotter’. ”
Cooling the past can be a sloppy way of describing what is going on, but it’s something that we’ve been talking about since 2007 or so.
Skeptics are fond of the phrase because the argument goes that we are somehow changing historical data.
As Brandon notes, IF you pick a different baseline you get a different-looking chart. BUT GISS and HADCRUT CAN’T PICK DIFFERENT BASELINES… their methods depend on picking a time period where dataset coverage is at a maximum. You could rebaseline afterwards.
That said… technically we should talk about adjustments increasing the trend and decreasing the trend.
But in skeptic land we tend to pick up your lingo. Sorry.
Will you, in very simple terms, explain why, when dealing with large data sets in which only the maximum/minimum temperature over the preceding 24 hours is being ascertained, TOB is a necessary adjustment outside cases where stations report observations in and around the warmest (or coldest) part of the day?
Of course, there may be some unusual weather phenomena which occasionally means that a day will not follow the usual temperature profile, but these are no doubt rare and random.
I can see the need for TOB where a station makes its observations in the period 1 pm to say 3:30 pm, I cannot see the need for TOB when the station makes its observation outside those periods, say at 10am in the morning, or at 6pm at night when dealing with large series data sets.
Obviously if the time of observation is a moveable feast at a station then a TOB adjustment would be necessary, but any station that has such a haphazard approach to data collection should not be included in the combined data set.
I consider that it would be better to dispense with TOB adjustments altogether, and to simply use station data where the station does not make its observations in and around the warmest/coolest part of the day.
Perhaps you could answer how many stations make their observations between 12:30pm and 3:30pm, so that we can have some insight into the number of stations whose data may be significantly influenced by the TOB.
Hi Richard,
I explain TOBs in quite a bit of detail (and empirically test it using hourly data) over here: http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/
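One way to probe the time-of-observation effect empirically, loosely in the spirit of the linked article (this is a sketch with synthetic hourly data, not the method used there), is to compute daily max/min under different reset hours and compare the resulting monthly means:

```python
import numpy as np
import pandas as pd

# Synthetic hourly temperatures: a diurnal cycle plus day-to-day weather.
hours = pd.date_range("2014-01-01", periods=24 * 365, freq="h")
t = np.arange(len(hours))
rng = np.random.default_rng(5)
temps = (10 * np.sin(2 * np.pi * ((t % 24) - 9) / 24)        # peak mid-afternoon
         + np.repeat(rng.normal(0, 3, 365), 24))              # day-to-day weather
series = pd.Series(temps, index=hours)

def monthly_means(series, reset_hour):
    """Daily Tmax/Tmin for 24 h windows bounded at reset_hour, then monthly (Tmax+Tmin)/2."""
    shifted = series.copy()
    shifted.index = shifted.index - pd.Timedelta(hours=reset_hour)
    daily_max = shifted.resample("D").max()
    daily_min = shifted.resample("D").min()
    return ((daily_max + daily_min) / 2).resample("MS").mean()

afternoon = monthly_means(series, reset_hour=17)   # observer reads/resets at 5 pm
morning = monthly_means(series, reset_hour=7)      # observer reads/resets at 7 am
print((afternoon - morning).mean())  # systematic offset caused by observation time alone
```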
I appreciate your article on adjustments…
http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/
In the Curry article, you state…
“While the impact of adjustments that correct for these biases are relatively small globally (and actually reduce the century-scale warming trend once oceans are included) ”
And here…
” However, globally adjustments -warm- the past, they don’t cool it:”
According to this, as GISS and HadCrut have been adjusted, new versions must have a lower rate of global warming. Can you please point to data to confirm this?
Hi Mary,
I don’t have access to past versions of those land-ocean records. I only have access to the raw data and the current records, and can compare the two, as in the figure from Karl et al above. It’s pretty clear that the century-scale trend in the adjusted land-ocean data is lower than that of the raw data, due mostly to ocean adjustments.
Seems every time a new version of GISS or HadCrut are released, the warming trend is stronger. Yet you and Mosher insist the adjustments have reduced the temperature trend.
Do successive versions of HadCrut and GISS have stronger warming trends or do they not? It should be an easy question but I haven’t seen the data. I’m sure “Goddard” probably has it but I don’t trust his numbers.
“Seems every time a new version of GISS or HadCrut are released, the warming trend is stronger. Yet you and Mosher insist the adjustments have reduced the temperature trend.”
Nope. The last version of GISS was 2010. The changes to UHI adjustments COOLED the record.
Figure 9a in that paper… a tiny amount, but it cooled the record.
Hadcrut stopped adjusting data in their last version. Temps went up.
Go figure
“Hadcrut, stopped adjusting data in their last version. temps went up. Go figure”
Makes sense. Every new version makes the warming trend stronger. At least that’s what the skeptics tell me. I’m still searching for the ACTUAL DATA that shows a new version with a lower global warming trend.
Then what do you call what happened in June of this year? In my previous post here:
http://wattsupwiththat.com/2015/08/14/problematic-adjustments-and-divergences-now-includes-june-data/
I said:
GISS
The slope is not flat for any period that is worth mentioning.
For GISS: There is no statistically significant warming since August 2003: Cl from -0.000 to 1.336.
The GISS average anomaly so far for 2015 is 0.82. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.97. The anomaly in 2014 was 0.75 and it set a new record. (Note that the new GISS numbers this month are quite a bit higher than last month.)
If you are interested, here is what was true last month:
The slope is not flat for any period that is worth mentioning.
For GISS: There is no statistically significant warming since November 2000: Cl from -0.018 to 1.336.
The GISS average anomaly so far for 2015 is 0.77. This would set a new record if it stayed this way. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2014 was 0.68 and it set a new record.
See Lord Monckton’s article here:
http://wattsupwiththat.com/2015/08/04/hadcrut4-joins-the-terrestrial-temperature-tamperers/
“Even though the satellites of RSS and UAH are watching, all three of the terrestrial record-keepers have tampered with their datasets to nudge the apparent warming rate upward yet again.“
2/3rds of the earth sounds dramatic; now tell us how many actual measurements that was in practice, for to have a worthwhile value for such a large and diverse area you need a very large number of measurements, not just a few spread across a very wide range which in reality they may poorly represent.
On the other hand, it could be the usual case of ‘better than nothing’, which reflects the reality that for a long time large parts of the planet had little or no coverage, and poor historical records of changes to areas where there was coverage. Hence climate ‘science’s’ addiction to ‘magical proxies’ such as tree rings.
I am impressed with a “Global Land Temperatures, 5 year Smooth” that not only smooths the past but also forecasts the raw data five years into the future!
The line ends at 2014; the label 2020 is a bit off to the right. Probably should have removed that, but it was the default behavior of the graphing package I was using.
Zeke,
What do you think causes the difference in LiG and MMTS measurements? That could have a significant impact on the proper method of adjustment.
In my opinion BEST missed a golden opportunity.
It should have audited all the stations so that it could have worked with the cream not with the crud.
All stations should have undergone a physical site inspection and a thorough audit of siting, siting changes, equipment used, screen changes, maintenance/calibration of equipment, length of data, approach to data collection and record keeping etc. Only the best sited stations (equivalent to USCRN) should have been used. Any station that made its observations in and around the warmest/coolest part of the day (such that there was a significant risk that its daily records may double record the highs or the lows of the same day, not different days) should have been chucked out. BEST should only have used stations which had the best kept data streams where there was little need for any adjustments.
A reconstruction should then have been made up to the time when LIG ended, and another reconstruction made when LIG measurements were replaced. Preferably there would be no splicing of the two.
Just use the cream.
What we are doing now is simply ascertaining the probity and value of the adjustments, nothing more than that. The land based thermometer record is useless for scientific purposes.
Doing physical site inspections of 40,000 stations was a bit beyond the budget of the Berkeley Earth project.
However, we largely do what you suggest and “cut” each station when an instrument change, station move, or TOBs change is detected and treat everything after as a new station. Turns out it doesn’t really change the results much compared to NOAA’s approach, though:
http://berkeleyearth.org/wp-content/uploads/2015/04/Figure1.png
Note that the figure above is for land temperatures. Could have been labeled more carefully.
“Doing physical site inspections of 40,000 stations was a bit beyond the budget of the Berkeley Earth project.”
Appreciated, at least on my part, but wouldn’t it be better to have 4,000 sites with a good, traceable and published history rather than 40,000 sites of questionable history?
If one could point to the metadata of each and every chosen site, justifying adjustments using all the metadata we have, then perhaps this would tend toward silencing critics.
Although I no longer care very much, I believe the work Paul Homewood did on the Icelandic temperature record is a case in point. The Icelandic Met had no idea how or why their ‘stated values’ had been subject to ‘adjustment’. Neither did I and I’m still collecting them, every hour, automatically.
Although Iceland is only a small region of the globe with only a small number of instrumentation sites, I felt that it was important for its strategic position. If the ‘unadjusted’ temperatures of Iceland tell us that it was just as ‘warm’ in 1935 as it is today then I would have thought that that data would be interesting to any ‘non aligned’ Scientist.
“In my opinion BEST missed a golden opportunity.
It should have audited all the stations so that it could have worked with the cream not with the crud.”
1. There are 40K stations, some no longer in existence.
“All stations should have undergone a physical site inspection and a thorough audit of siting, siting changes, equipment used, screen changes, maintenance/calibration of equipment, length of data, approach to data collection and record keeping etc.”
1. Go ahead.
“Only the best sited stations (equivalent to USCRN) should have been used.”
1. There is no field tested proof that CRN siting requirements need to be met.
2. Comparing 10 years of CRN to 10 years of bad stations, we find no difference.
“Any station that made its observations in and around the warmest/coolest part of the day (such that there was a significant risk that its daily records may double record the highs or the lows of the same day, not different days) should have been chucked out. BEST should only have used stations which had the best kept data streams where there was little need for any adjustments.”
1. Presumes you can specify what counts as best
2. There is a trade off between spatial uncertainty and measurement uncertainty
Steven and Zeke.
I am sorry to say that I find your response somewhat disingenuous. As we all know, the weather stations being used in the global land temperature anomaly data set were never designed and intended for the use to which they are now being put. A scientist does not design an experiment in a haphazard manner, unconcerned with the quality of the results it will deliver, taking the view that he will adjust and homogenise poor results with a view to getting something respectable. When designing an experiment, much thought is given to the quality of results that will be returned. If these weather stations are to be used for a task for which they were not designed, one has to give ground-up consideration as to how best to constitute a network that will return good quality data fit for purpose. This starts with the stations themselves, their siting, the equipment being used, the method and approach to data collection, and the general history of the site and its quality control/approach to data collection. That is the scientific approach to the task that was being undertaken by BEST.
You seem to suggest that because of numbers this was an impossible task, but it only needs some sensible collaboration, and a little imagination.
As you know, there are not 40,000 stations being used in the global temperature data set. As regards the 1880s, there were just a few hundred, eventually it peaked at about 6,500, and today only somewhere between 2,500 to 3,000 stations are being used. You are out by an order of magnitude. It is this distortion of the relevant numbers that I find to be rather disingenuous.
Essentially, one is looking at about a maximum of 3,000 stations. Every University offering a social science course could have conducted a field study involving their local weather station (the one in the 2,500 to 3,000 stations being used in the 21st century) and they could carry out the field study and site audit, examine the history of the station, its equipment, the data collection process etc. It would not be a very onerous task and would make a useful course study. That field study together with the detailed examination of the data from that site could then have been given to a team (at BEST) who could then have reviewed the collected data.
Many stations could quickly be disregarded as not meeting the USCRN siting standards. Indeed, the field study audit would on the first page of its study report detail whether the station complied with that standard. So my guess is that the vast majority of stations could have been eliminated in 1 minute, and the team at BEST could quickly have found a pile of stations that complied with USCRN siting criteria in a matter of hours.
We know from the surface station study that relatively few stations in the US complied with those siting criteria. How many stations do you think there would have been in Europe, Africa (which is sparsely sampled), Australia (which is sparsely sampled), Russia (the Russian plains being sparsely sampled), South America (which is sparsely sampled) etc ? My guess is not many. I would be surprised if one would have been left with 500 stations world wide.
You mention a trade-off with spatial coverage. I agree that there is a trade-off. Of course, there would be an issue with spatial coverage if one was left with only a few hundred stations, but this has never prevented people from claiming a global average temperature anomaly for the late 1800s (when there were fewer than 500 stations in the data set and the distribution of these stations was not at all global), and even today, the approx. 2,500 stations being used provide little global spatial coverage, but that does not seem to concern those who love the land thermometer record. Given that the US accounts for approximately 40% of all stations being used, one can see how little and sparsely the globe is sampled.
It is no surprise, and therefore no endorsement, that in the 21st century the 10 years of USHCN station data follow closely the 10 years of USCRN station data. In the 21st century all the stations have been fully saturated by UHI and are using the same sort of equipment. The issue here is what the last century was like when there were fundamental changes, when small hamlets became towns, which then became cities etc. It is the post-war changes through to the 1990s which are particularly important (rendered even more significant given that the IPCC only ascribed AGW to the post-war period).
The point is a simple one. Any scientist would wish to get the best quality data possible with which to work with, and that is the first task that BEST should have undertaken. It should have separated the cream from the crud, and then worked only with the cream. It is a task that could have been done with a little imagination.
“As you know, there are not 40,000 stations being used in the global temperature data set. As regards the 1880s, there were just a few hundred, eventually it peaked at about 6,500, and today only somewhere between 2,500 to 3,000 stations are being used. You are out by an order of magnitude. It is this distortion of the relevant numbers that I find to be rather disingenuous.”
wrong.
You and everyone else continue to focus on GHCN-M. Guess what… we don’t use that as our only source.
We use GHCN-D, and GCOS, and other sources… actually all publicly available sources.
There are 40,000 stations in the dataset. They all get used. Some are very short, which is actually good,
as long stations tend to undergo changes: instruments, locations, land use, observers, TOB, etc.
Today there are about 10,000 stations that get used… not 2500
here is the northern hemisphere
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/northern-hemisphere-TAVG-Counts.pdf
Try again.
Or consider this. How is it that the horrible stations in the US MATCH THE PERFECT triple redundant
CRN stations?
splain that and I will answer your next question
Steven
First let me say that I much appreciate you and Zeke commenting on this article, and I expect you both to forcefully defend your corner. However, and this is human nature, there is a tendency to lose objectivity when one becomes too closely and personally involved in something.
Second, I was unaware that BEST was working with 40,000 stations in its reconstructions. It would be useful for you to provide details of their localities so we can see how spatial the coverage is on a global basis. If BEST is working with 40,000 stations then I consider my comment that I found your earlier response a little disingenuous to be a bit harsh, and I apologize for that.
Third, I remain of the view that my main point stands, namely that it would be better to work with the cream, not the crud. There is no need to have more than a few hundred good quality stations. Indeed, even you make that point: “… If you had 60-100 OPTIMALLY placed stations.. with good history back to 1750
that would be all you needed.” I fully concur with that, but of course, there never will be optimally placed stations from a global spatial perspective when dealing with the old stations since less than 10% of the globe is measured. The US and the UK have high density coverage, but if you look at other areas the spatial coverage is extremely poor.
Fourth. BEST obviously made a decision to use 40,000 stations. It did not need to go down that route. It could instead have gone down the route of finding the best 100 or so stations and to use the data from only those stations. In my opinion that would have been the better approach.
Fifth, It would be good to go back to the 1750s and see the rebound from the LIA, but that is unrealistic from a global perspective. It is probably unrealistic to back before the turn of last century, which is unfortunate since the 1880s were undoubtedly a warm period and we have the impact of Krakatoa which is worthy of study. But we have to accept that weather stations were thin on the ground, and one cannot create data where none exists.
Sixth, I find the land based thermometer record of little interest for many reasons, not least that it does not even measure the correct metric (ie., energy) and the heat capacity of the atmosphere is dwarfed by that of the oceans. There can be no significant global warming unless the oceans are warming, and the rate of ocean warming will buffer and determine the rate of global warming. So personally, I am only concerned with the ocean data, and if we want to know something about the atmosphere, we have the satellite data (which has its own issues) but at least it has spatial global coverage.
Seventh, and this is not directed at you or BEST, but had this been a serious science, and had the IPCC really desired to get to the bottom of this issue, as soon as it was set up, it would have rolled out on a global basis the equivalent of the USCRN network (ie., reference quality stations worldwide) and ARGO. This would have been the number one agenda, and would have started before FAR.
Eighth. You state: “6. C02 in fact warms the planet” With respect, we do not know whether that is a fact. CO2 is a radiative gas (not a ghg) and its laboratory dynamics are well known, but the Earth’s atmosphere is not laboratory conditions, and how CO2 plays out and what effect it has in the complex interaction of the Earth’s atmosphere is yet to be determined. And the problem here is the quality of the data available, since if there was sufficient good quality data, we would not be having a debate as to whether CO2 warms the planet, and if so by how much, but rather whether warming would be a net positive. Within the limitations of the data available, we can find no signal from CO2, and that is why the IPCC does not set out a plot of the signal and why there is so much uncertainty surrounding Climate Sensitivity to CO2, and why there are numerous models all outputting different projections.
Thus all we can say at this stage is that the signal (climate sensitivity) to CO2 is so small that within the limitations of our best measuring devices and their error bounds, the signal is undetectable over and above the noise of natural variation and cannot be separated therefrom. Given that, can climate sensitivity be large, well that depends upon the width of the error bounds of our best measuring devices (by which I include the lack of data length and the problems with spatial coverage as well as practical issues such as sensitivity etc). If these error bounds are large, the possibility exists that climate sensitivity could be large, but if the error bounds are small then climate sensitivity (if any) must likewise be small.
Personally, I consider the error bounds to be very much underestimated (and part of that issue is the comment you make: “The skeptics BEST ARGUMENT is the one Nick Lewis does… its about sensitivity”). I agree that sensitivity is one of the issues.
“Appreciated, at least on my part, but wouldn’t it be better to have 4000 sites with a good traceable and published history rather than have 40,000 sites of questionable history.”
Err No.
Let’s put it this way. If you had 60-100 OPTIMALLY placed stations… with good history back to 1750,
that would be all you needed.
But we don’t have that.
Now you could “define” what you mean by good history and select the best AFTER you made that decision.
Then you could do tests.
Basically you can test which is better: 4,000 “good” stations or 40,000 “other” stations.
It’s a trade-off between SPATIAL UNCERTAINTY (how much temperature varies across space) and measurement uncertainty.
Lots of folks suppose that fewer, better stations is “better”; they don’t actually test this supposition.
Here is an easy way:
There are 110 CRN stations. Let’s call those the best.
Got it? HOLD THOSE OUT… hide them from your analyst.
There are 20,000 “other” stations in the US.
Next:
Take the 20,000 and create an expected temperature field for the US.
Use that field to PREDICT what CRN will say.
Use the “bad” to predict the good.
Then take 2,000 of those “other” stations and predict what CRN will say.
Compare the predictions using 20K sites and 2K sites.
Which will predict better?
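For readers who want to try this, here is a minimal sketch of the hold-out test described above, using synthetic station data and simple inverse-distance weighting in place of the kriging BEST actually uses; the station counts, coordinates and noise levels are illustrative assumptions, not real records.

```python
# Sketch of the hold-out test described above: predict held-out "reference"
# stations from a temperature field built out of many "other" stations.
# Synthetic data and inverse-distance weighting stand in for real station
# records and for BEST's kriging; all numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def true_field(lon, lat):
    # A smooth fictitious "climate" field over a US-sized box.
    return 10 + 0.3 * lat + 2.0 * np.sin(lon / 10.0)

def make_stations(n, noise):
    lon = rng.uniform(-125, -70, n)
    lat = rng.uniform(25, 50, n)
    temp = true_field(lon, lat) + rng.normal(0, noise, n)   # measurement error
    return lon, lat, temp

def idw_predict(lon_s, lat_s, t_s, lon_q, lat_q, power=2.0):
    # Inverse-distance-weighted estimate of the field at the query points.
    d2 = (lon_q[:, None] - lon_s) ** 2 + (lat_q[:, None] - lat_s) ** 2
    w = 1.0 / (d2 + 1e-6) ** (power / 2.0)
    return (w * t_s).sum(axis=1) / w.sum(axis=1)

# 110 well-sited "reference" stations, held out from the analysis.
ref_lon, ref_lat, ref_temp = make_stations(110, noise=0.1)

for n_other in (20000, 2000):
    lon, lat, temp = make_stations(n_other, noise=1.0)      # noisier "other" stations
    pred = idw_predict(lon, lat, temp, ref_lon, ref_lat)
    rmse = np.sqrt(np.mean((pred - ref_temp) ** 2))
    print(f"{n_other:>6} other stations -> RMSE vs held-out reference: {rmse:.3f}")
```

Whether the 20K or 2K network wins depends on the balance of spatial variability and station noise, which is exactly the trade-off the comment describes.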
Actually, there are a few score geographically representative, century-long records which, though far from optimally sited, provide a relatively UHI-free glimpse of world-wide temperature variations over the twentieth century. They show virtually no secular trend and strong low-frequency variations that are largely suppressed in BEST’s elaborately rationalized, humongous calculations. I intend to prepare these striking, straightforward results for publication next year.
Good luck. Most long records are the worst.
Long, vetted records are indeed the worst… for concealing actual long-term variations. If you want to play God with climate data, chop up the records into small pieces and re-assemble them according to the preconceptions of academic theory. That is how bogus “trends” are manufactured!
“This should put to rest the notion that the strong El Nino of 1998 had any lasting affect on anything.”
Well, it depleted ocean heat content, which has a net negative effect on subsequent surface temperatures.
True. Then the pause resumed after a couple of years.
A couple of years later was towards the end of a multi-year La Niña which, post-2001, raised the running average by around 0.2°C above pre-1998 temperatures; surface temperatures then leveled off after that.
Werner Brozek October 2, 2015 at 5:03 am says:
‘ “Well it depleted ocean heat content, which has a net negative effect on following surface temperatures.” True. Then the pause resumed after a couple of years. ‘
Not quite true. Before it resumed, a step warming intervened and raised global temperature by a third of a degree Celsius in only three years. That is not negative. It made the hiatus platform that was then created warmer than any twentieth-century temperature had been, except for 1998. Even Hansen noticed that and wanted to recruit that warming into his greenhouse brigade. That is quite impossible, because such a short warming can in no way be caused by the greenhouse effect.
You are right about the step change; however, with respect to the negative, 1998 set a record and is now listed as 0.536 on HadCRUT4, while 1999 is now 0.307 and 2000 is 0.295.
“Figures rarely lie but liars frequently figure”.
I work with a couple of really good stat guys. They can make the data look any way you want. Especially when you have such a huge data set. A tiny tweak here or there can make all the difference.
What I suspect is going on is that adjustments that make the record look warmer are embraced with enthusiasm. The adjustments that make the overall record look cooler are discouraged or ignored.
This could happen pretty easily without malice. Very easily with malice.
Take for example the case of “Painting the Obs Box”…
Start with a nice new thermometer shelter at a nice new airport in the countryside. Average temperature is 50. Slowly, the box gets dirty and the venting clogs with spider webs and gunk. Because of this, the temperature creeps up 0.1 deg per year. Ten years later, the temp is 51. The obs shelter is cleaned, painted or replaced. The temp instantly drops a degree. The adjustment algo finds “break points” and snips. So, the quick drop from 51 to 50 is eliminated, but the slow, bogus warming from 50 to 51 is retained. The original temp is adjusted lower to 49.
Cycle and repeat. Imaginary warming created.
This kind of problem can be conveniently ignored or discounted by the adjusters.
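A toy simulation of this “painting the obs box” artifact is easy to write down: assume a flat true climate, a 0.1 deg/yr maintenance drift that resets every ten years, and a naive break-point correction. This is not any agency’s actual homogenisation code, only a sketch of the failure mode described above.

```python
# Toy illustration of the "painting the obs box" artifact: a slow spurious
# drift (+0.1/yr) followed by a sudden drop back to truth each time the
# shelter is cleaned. A break-point-style correction that removes only the
# sudden drops leaves the slow drift in place. All numbers are illustrative.
import numpy as np

years = np.arange(1960, 2000)
true_temp = np.full(len(years), 50.0)                   # the real climate: flat at 50

raw = true_temp + 0.1 * (np.arange(len(years)) % 10)    # drift that resets every 10 years

# Naive homogenisation: find jumps larger than 0.5 deg between years and
# shift all earlier data so the series is continuous across each jump.
adjusted = raw.copy()
steps = np.diff(raw)
for i, step in enumerate(steps):
    if abs(step) > 0.5:                                 # a "break point"
        adjusted[: i + 1] += step                       # snip the jump, keep the drift

print("raw trend      :", np.polyfit(years, raw, 1)[0], "deg/yr")
print("adjusted trend :", np.polyfit(years, adjusted, 1)[0], "deg/yr")
# The adjusted series now shows a warming trend that the true climate never had.
```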
“I work with a couple of really good stat guys. They can make the data look any way you want. Especially when you have such a huge data set. A tiny tweak here or there can make all the difference.”
Wrong.
Take 40,000 sites. You will get one answer. Call it 52.
Take any 5,000 of those sites (we did, multiple times): you get ~52.
Take any 1,000: you get ~52.
Take only the rural sites… you get ~52.
Take only long series… you get ~52.
Forget to do QC… you get ~52.
Basically, the LLN (law of large numbers) is very forgiving.
If you take only raw data FOR LAND, you’ll get about 45.
That is, for the land, adjustments range around 10%, depending on the time period.
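The subsampling part of that claim is easy to illustrate with synthetic numbers. The sketch below assumes a made-up network of 40,000 station averages and shows only the law-of-large-numbers behaviour, not anything about the real value of ~52.

```python
# Rough sketch of the subsampling argument above: with tens of thousands of
# stations, the average of almost any large random subset lands close to the
# average of the full network. Station values here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 40000
sites = rng.normal(52.0, 8.0, n_sites)          # fictitious station averages

print("all 40,000 sites :", sites.mean().round(2))
for k in (5000, 1000, 100):
    sub = rng.choice(sites, size=k, replace=False)
    print(f"random {k:>5} sites:", sub.mean().round(2),
          "(std. error ~", round(sites.std() / np.sqrt(k), 2), ")")
```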
The skeptics’ BEST ARGUMENT is the one Nick Lewis makes… it’s about sensitivity.
Since you clearly know everything, I’ll ask the same question again…
Can you point to a specific newer version of GISS or HadCrut that showed a lower global warming trend than the previous version?
Yes. For GISS, the last major update was in 2010.
They changed how they adjusted for UHI, and that dropped the warming trend. That’s right, the trend dropped; see Figure 9a.
For HadCRUT, the last major update was the switch to version 4.
For version 4 they included more stations (less interpolation error) and they DROPPED their adjustment code.
The effect was to warm the record.
So, there you have it. One set of changes cooled the record; another warmed the record.
Clearly some people did not read the secret memo.
September 2015 estimated temps will be slightly higher than August’s, up by an average of 0.03 deg.
http://postimg.org/image/jrrt7wrdp/47d8dfeb/
In the past 12 months, the satellite data has warmed more than the surface-based data.
http://postimg.org/image/63hi921wx/9b76cef2/
Since 1979, the hourly CFS model initial conditions have closely tracked the satellite data, not the surface based adjusted data.
http://postimg.org/image/pcdha150d/aaffe59f/
Mary,
Agreed. I posted something similar above and it was ignored. That’s another part of confirmation bias–it makes it easy to ignore or gloss over potential problems with the results.
CFS? Oy vey.
That still doesn’t answer her question.
Her question is answered.
For those who aren’t following along here….
The CFS is “Climate Forecast System”.
http://cfs.ncep.noaa.gov/
To run, the model must be initialized properly.
To initialize models, all available data is gathered and error-checked and gridded into the model to get the best possible initial conditions. This includes sfc obs, balloon data, buoys, satellite info, ship reports, air recon, radar, etc. The initialization is important to a good model run and is objective and automated and done hourly.
Interestingly, it also creates an hourly, global grid of 2m surface temperature which can be found here… http://models.weatherbell.com/temperature.php#!prettyPhoto
I’ve seen the record back to 1979. It closely tracks along with GISS, HadCrut, UAH and RSS as you might expect.
Since 2000, the sfc data and satellite data have diverged. The trend in the CFS data more closely matches the trend from the satellites…showing very little warming.
Mosher is dismissive of this data. That’s fine. There may be good reason to dismiss it. I’d like to hear those reasons. I’ve never heard it discussed before.
Goddard has a new page out to protect himself against Chairman Mao-type and Stasi-style programs.
https://stevengoddard.wordpress.com/2015/10/01/the-price-of-freedom-is-eternal-vigilance-2/#more-129104
I hope he brought a tinfoil hat…
Zeke,
I read your post at Prof. Curry’s blog and appreciate your input here. And I know that you emphasize the need for adjustments based on things like the change from LiG sensors to MMTS sensors, which leads to messy data sets over time.
However, I haven’t seen any response to questions about the physical reasons for the difference in measurements between MMTS and LiG. As this could impact the way adjustments are made, I’m curious if you have found studies explaining the cause of the differences?
Zeke,
Let’s pretend ‘Goddard’ is crazy, as you imply. Two thoughts:
1. That’s your argument, so you fail, and…
2. Better to be crazy than mendacious.
Climate alarmists are mendacious when they try to argue science, because the debate by now is 97% political. Your ‘science’ is just a thin, 3% veneer.
Honest scientists look at the absence of global warming for many years, and conclude that the original CAGW premise was simply wrong. The others try to rationalize it.
“However, I haven’t seen any response to questions about the physical reasons for the difference in measurements between MMTS and LiG. As this could impact the way adjustments are made, I’m curious if you have found studies explaining the cause of the differences?”
RIF. Google is your friend.
Hi Atkers1996,
I was curious about MMTS adjustments myself a few years back. So I looked at pairs of nearby stations, one of which transitioned to MMTS and the other which did not. The effect is really hard to miss: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
There are also a number of studies of side-by-side stations: http://ams.confex.com/ams/pdfpapers/91613.pdf
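For illustration, here is a minimal sketch of that paired-station comparison using synthetic data; the transition year, the assumed 0.4 deg shift and the pair construction are assumptions for the sake of the example, not results from the linked studies.

```python
# Sketch of the paired-station check mentioned above: for nearby station
# pairs where only one instrument switched to MMTS, look at the difference
# series before and after the transition year. Station values, the 1990
# transition year and the -0.4 deg shift are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1980, 2000)
before, after = [], []

for _ in range(50):                                      # 50 fictitious nearby pairs
    shared = rng.normal(0, 0.3, len(years))              # weather both stations see
    lig = shared + rng.normal(0, 0.1, len(years))        # neighbour that kept LiG
    mmts = shared + rng.normal(0, 0.1, len(years))
    mmts[years >= 1990] -= 0.4                           # cooling shift at MMTS transition
    diff = mmts - lig
    before.append(diff[years < 1990].mean())
    after.append(diff[years >= 1990].mean())

print("mean MMTS-minus-LiG difference before transition:", round(np.mean(before), 2))
print("mean MMTS-minus-LiG difference after transition :", round(np.mean(after), 2))
```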
Thanks for the response. I agree, the effect seems pretty clear. I’m asking about the cause, though. It seems that determining the reason for the difference is just as important as validating the difference. I’ve read the Ft Collins analysis, but it focuses on verifying a constant (or somewhat constant) difference, not on attempting to discern the physical mechanisms that cause it.
“All it requires is a failure in applying double blind, placebo controlled reasoning in measurements.”
http://static.berkeleyearth.org/posters/agu-2012-poster.png
What you see above is just one of the double-blind tests performed to see if adjustment codes are biased.
1. Eight worlds are created. One world is defined as ground truth; seven worlds have various artifacts added to the data: jumps, drifts, etc.
2. The teams correcting the worlds have no idea which world is uncorrupted, which are corrupted, or how.
3. The teams apply their methodology and the results are scored:
Did your adjustments screw up good data? Did you correct data in the right direction?
A similar study was done in 2012.
The point is this:
A) The methods were tested in a double-blind fashion.
B) The methods correct the errors; they move the answer closer to the truth.
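A schematic version of that benchmarking scheme is sketched below, with made-up corruption types and a deliberately trivial stand-in for the adjustment step; it is not the Berkeley Earth code, only an illustration of how such a blind test is scored against hidden ground truth.

```python
# Sketch of the benchmarking scheme described above: several synthetic
# "worlds" are generated, one clean and the rest corrupted with jumps and
# drifts, and an adjustment method is scored by how much closer (or not)
# its output is to the hidden ground truth. The corruption types and the
# trivial "adjustment" below are placeholders, not any real method.
import numpy as np

rng = np.random.default_rng(2)
n_years = 100
truth = np.cumsum(rng.normal(0, 0.1, n_years))           # ground-truth anomaly series

def corrupt(series):
    out = series.copy()
    out[rng.integers(10, 90):] += rng.normal(0, 1.0)      # a station-move style jump
    out += np.linspace(0, rng.normal(0, 0.5), len(out))   # a slow instrument drift
    return out

def adjust(series):
    # Placeholder "homogenisation": remove the largest year-to-year jump.
    i = np.argmax(np.abs(np.diff(series)))
    out = series.copy()
    out[i + 1:] -= np.diff(series)[i]
    return out

worlds = [truth] + [corrupt(truth) for _ in range(7)]     # the team doesn't know which is which
for i, world in enumerate(rng.permutation(np.array(worlds))):
    before = np.sqrt(np.mean((world - truth) ** 2))
    after = np.sqrt(np.mean((adjust(world) - truth) ** 2))
    print(f"world {i}: RMSE before {before:.2f}, after adjustment {after:.2f}")
```

Scoring the clean world alongside the corrupted ones is what answers the question "did your adjustments screw up good data?"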
Steven Mosher
I appreciate your rigour in sticking to the pure science, but even the most ardent alarmist would, or should, be concerned at the implications of the figure put up by S or F.
http://wattsupwiththat.com/2015/10/01/is-there-evidence-of-frantic-researchers-adjusting-unsuitable-data-now-includes-july-data/#comment-2039301
From a genuinely skeptical point of view (in the scientific sense), it surely raises some alarm bells if it is real, though I haven’t run the data myself. But surely it is not just by “chance”?
Any comments?
Rgds
It’s pretty much garbage.
1. USHCN is 1,200 stations.
2. If you did the same chart for African adjustments you’d get the opposite answer.
3. I personally don’t use USHCN in any of my work; I prefer raw daily data.
4. Like with the solar crap, if you look in enough places you will find my shoe size correlated with something.
5. The adjustment code has been tested in a blind test.
6. CO2 in fact warms the planet.
7. S&F doesn’t show his data or methods.
Should I go on?
There ARE REAL PROBLEMS in the surface temperature products. Let me tell you what they are:
A) LOCAL small-scale accuracy
B) Potential microsite issues
C) Potential UHI
That’s it. If people put their brains on the real problems, we will get better answers.
Otherwise y’all look like tinfoil-hat-wearing types.
“6. CO2 in fact warms the planet”
Actually, I always thought that “greenhouse gases”, such as CO2, prevented the planet from cooling.
I am still trying to find out the average yearly temperature (“best estimate”) of the “lower troposphere”, the whole of it, from the surface to 12,000 m (or whatever it is), over a 30-year span, as per UAH and RSS. If there is an anomaly, there must be an absolute temperature to refer to?
I see that it is about −47 °C at 11,000 m and about 15 °C on the ground as an average, so splitting the difference gives −16 °C halfway up. However, the temperature probably does not vary uniformly with height, due to the effects of water vapor. As well, I understand that not all parts are weighted equally.
This is not my area of expertise, so if you want further details, you will have to ask Dr. Spencer to refer you to additional information.
RSS posts absolute values.
Thank you. What, then, is the yearly 30-year average for sea-level/ground temperature? FM
Steven
I do not know whether you are still following this article. If you are, I have responded to your Steven Mosher, October 2, 2015 at 8:05 pm comment at richard verney, October 3, 2015 at 4:27 am.
You have kindly posted details of the number of stations used by BEST http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/northern-hemisphere-TAVG-Counts.pdf
It would be useful if you would post details of the location and global distribution of these stations. Hopefully, BEST has these details (plotted on a global map) at, say, 20-year intervals, so that we can see where the various stations used are located in, say, 1750, 1775, 1800, 1825, etc., to date. Let’s see the global/spatial coverage of these stations over time.
Do you have similar data for the Southern Hemisphere: how many stations, and their global distribution over the years? If so, please post those details, since I am sure that I am not alone and that there are many who would like to see this important and useful data.
For my part, my earlier comments were based upon the chart set out in the recent post on WUWT on 24th September: “Approximately 66% of global surface temperature data consists of estimated values”.
The main chart:
To me, this chart suggested (and my reading and understanding of that chart may have been wrong) that the global GHCN temperature data set peaked at just under 6,000 stations in 1975, was down to only 3,000 stations by 1995, and was down to only about 2,600 stations by 2005. As I say, maybe I am misunderstanding the significance of that chart and the role that it plays in the GISS temperature anomaly reconstruction.
Dr Brown
You say “I was managing to convince myself of the general good faith of the keepers of the major anomalies. This correction, right before the November meeting, right when The Pause was becoming a major political embarrassment, was the straw that broke the p-value’s back. I no longer consider it remotely possible to accept the null hypothesis that the climate record has not been tampered with to increase the warming of the present and cooling of the past and thereby exaggerate warming into a deliberate better fit with the theory instead of letting the data speak for itself and hence be of some use to check the theory.”
and
“Why was NCDC even looking at ocean intake temperatures? Because the global temperature wasn’t doing what it was supposed to do! Why did Cowtan and Way look at arctic anomalies? Because temperatures there weren’t doing what they were supposed to be doing!”
//////
But this bias and lack of objectivity was demonstrated earlier, with ARGO. When ARGO initially returned results, it was showing ocean cooling. The oceans were not doing what they were supposed to do, i.e., be warming, so NASA looked into this. They would not have looked into it had the oceans been warming.
The result of their enquiry (I would not call it an investigation) was that ‘obviously’ some ARGO floats were faulty, showing a cooling bias, and that these floats should be removed from the data series; see:
http://earthobservatory.nasa.gov/Features/OceanCooling/page1.php
So raw data which was showing a cooling trend was adjusted, by removal of some of the data, so that it now showed a warming trend! Where have we seen that before?
But this was not at all scientific. Had a scientific approach been adopted, the putatively erroneous floats (i.e., those showing a cooling trend) would have been identified and a random sample of them returned to the laboratory for testing, to see whether the instrumentation was or was not faulty. It appears that this was never done.
Further, if some of the floats had faulty equipment showing a cooling trend, perhaps other floats had faulty equipment erroneously showing a warming (or greater warming) trend. The floats showing the fastest rate of warming should likewise have been identified and a random sample returned to the laboratory for equipment testing. Again, it appears that this was not done.
Why was there no attempt to test the equipment to see whether it was faulty? Why was it simply assumed to be faulty without proper verification? Why was it thought appropriate simply to remove those floats annoyingly showing a cooling trend, without proper investigation to confirm actual equipment failure?
It would appear that the answer is the same as the reason why someone would now seek to adjust ARGO data to bring it in line with ship log data. Ship log data is cr*p. I say this from personal experience, having reviewed hundreds of thousands (possibly millions) of ship log entries covering weather conditions and ocean temperature. No one in their right mind would seek to calibrate ARGO data (which has the potential to be high quality) against cr*ppy ship log data. That is not scientific.
The lead post’s title questions ‘Is There Evidence of Frantic Researchers “Adjusting” Unsuitable Data? (Now Includes July Data)’.
My idea is that there are scientists, ones focused on climate, who hold an ‘a priori’ premise of a pre-science** nature. That premise is that man’s burning of fossil fuels must cause significant, unambiguously discernible warming that must cause net harm. The work product of those scientists does not seek to determine the truth or falsity of their ‘a priori’ premise of a pre-science nature, because they do not question the truth of their premise. Their work product seeks only the public perception that there is warming and harm. They are working to fulfill the prophecy of their pre-science ‘a priori’ premise.
When confronted with aspects of reality that do not confirm their ‘a priori’ premise of a pre-science nature, they seek to change those realities to conform, or to suppress them.
** ‘pre-science’ means having an ideological and/or belief basis rather than a logical, objective, scientific basis.
John
What you’re seeing here is Real Science being turned into a faux religion. My opinion is that in the near future you will see it and CERN fall on their faces. Then they’ll hand you a New Green Pope, because the old one signed on to “the Earth is flat” and AGW.
At 18:40 into this video they cover the most outrageous example of data manipulation. The IPCC literally drops established charts that don’t agree with its desired conclusion and replaces them with garbage like the “Hockey Stick.”
https://youtu.be/0LbqU3spW90?t=18m40s
I doubt that many are still following this thread, but this is a point that should be made.
The land-based thermometer record is a proxy record, and GISS, BEST and the like keep on using different proxies in their reconstructions.
If the record is to show the warming going back to 1880, then one should identify the stations that were returning data in 1880. One then identifies how many of those stations have a continuing and extant record from 1880 to today. One should then use only those stations (i.e., the ones which have a continuous record from 1880 to date) in the reconstruction.
Thus, for example, if in 1880 there were only 350 stations and of those stations 100 have fallen by the wayside, one is left with 250 stations, and it is these stations, and these alone, that should be used in the presentation of global temperatures from 1880 to date.
If by 1900 there were, say, 900 stations of which, say, 250 have fallen by the wayside, then one would have 650 stations which should form the basis of any reconstruction from 1900 to date. Only the 650 stations that have a continuous and existing record throughout the entire period should be used.
If by 1920 there were, say, 2,000 stations of which 450 have fallen by the wayside, one would be left with 1,550 stations, and it is these stations, and only these stations, that should be used in the temperature reconstruction from 1920 to date.
The land thermometer data should not consist of a constantly changing set of station data in which the siting and spatial coverage (and hence weighting) is constantly changing. That does not produce a valid reconstruction time series.
Those producing the record should present data which contains precisely the same stations throughout the entirety of the time series. What could be presented is a series of plots at 10-year intervals to date, i.e., 1880 to date, 1890 to date, 1900 to date, 1910 to date, etc. At the top of each separate time series, the number of stations being used in the reconstruction could be detailed, with an explanation of their split between NH and SH.
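A minimal sketch of this fixed-network idea, using a made-up three-station inventory; the record layout and numbers are assumptions for illustration, but the filtering logic is the one proposed above.

```python
# Sketch of the fixed-network idea: for each start year, keep only stations
# with records covering every year from that start to the present, and build
# the series from those stations alone. The records below are fictitious.
import numpy as np

# station_id -> {year: annual mean temperature}, made-up example data
records = {
    "A": {y: 10 + 0.01 * (y - 1880) for y in range(1880, 2016)},
    "B": {y: 12 + 0.01 * (y - 1880) for y in range(1900, 2016)},
    "C": {y: 9 + 0.01 * (y - 1880) for y in range(1880, 1950)},   # fell by the wayside
}

def fixed_network_series(records, start, end=2015):
    """Average only stations reporting in EVERY year from start to end."""
    years = range(start, end + 1)
    keep = [sid for sid, rec in records.items() if all(y in rec for y in years)]
    series = {y: np.mean([records[sid][y] for sid in keep]) for y in years}
    return keep, series

for start in (1880, 1900, 1920):
    keep, series = fixed_network_series(records, start)
    print(f"{start} to date: {len(keep)} continuous stations ->", keep)
```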
That sounds like good logic. +1
Richard:
The index manufacturers, who are all busy papering over the huge gaps in spatial and temporal coverage provided by (usually urban) station data, have indeed lost sight of geophysical fundamentals. In particular, because of highly inconsistent “trends” and distinctly different spectral signatures, one cannot shuffle different sets of stations in and out of the global index calculations and hope to obtain a meaningful result for whatever secular changes may have taken place over time scales of a century or longer. This effectively allows spatial differences to mask and confound temporal ones. The only sure way to control spatial variability is to eliminate it by maintaining the IDENTICAL set of stations throughout the entire period of record.