Guest Post by Werner Brozek, Edited by Just The Facts:
The above graphic shows that RSS has a slope of basically zero since October 1996, a period of 18 years and 3 months. UAH version 5.5 has a slope of basically zero for 10 years, since January 2005. I would like to thank Walter Dnes for determining the values. WFT does not show version 5.6; however, the flat period on that version runs from January 2009, an even 6 years. In contrast, the other three data sets I report on have flat periods of less than a year, which in my opinion is not worth calling a pause.
Why is there this difference between the satellites and the surface measurements? Is one more accurate than the other?
There have been numerous recent articles on the adjustments made to the surface temperature records in Bolivia, China, Paraguay, and elsewhere; and we recently had a meteorologist in Germany who noted that:
“One reason for the perceived warming, Hager says, is traced back to a change in measurement instrumentation. He says glass thermometers were replaced by much more sensitive electronic instruments in 1995. Hager tells the SZ:
‘For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer. Thus we are comparing – even though we are measuring the temperature here – apples and oranges. No one is told that.'”
Below, I would like to compare the final rankings of 2014 with respect to other years for the 5 data sets that I report on, as well as for UAH version 5.6. I will do so in three parts. In the first part, I will give the ranking without regard to how close 2014 is to any other year. In the second part, I will assume that any other anomaly average that is up to 0.03 above or below the 2014 average is in a statistical tie with 2014. In the third part, I will expand on this and assume that any anomaly that is within 0.1 of the 2014 anomaly is in a statistical tie with 2014.
So for the first part, the rankings are as follows:
UAH version 5.5: 7th
UAH version 5.6: 3rd
RSS: 6th
HadCRUT4: 1st
HadSST3: 1st
GISS: 1st
The above ranks are the ones that appear on line 24 of the present table. Furthermore, these same numbers will appear on line 1 of the new table when I give the data for 2015.
For the second part, here are the rankings if we assume that any anomaly that is up to 0.03 above or below the 2014 anomaly is in a statistical tie with 2014:
UAH version 5.5: a statistical 7 way tie from ranks 4 to 10
UAH version 5.6: a statistical 3 way tie from ranks 3 to 5
RSS: a statistical 4 way tie from ranks 6 to 9
HadCRUT4: a statistical 4 way tie from ranks 1 to 4
HadSST3: It remains in 1st by itself
GISS: a statistical 3 way tie from ranks 1 to 3
For the third part, here are the rankings if we assume that any anomaly that is up to 0.1 above or below the 2014 anomaly is in a statistical tie with 2014:
UAH version 5.5: a statistical 12 way tie from ranks 3 to 14
UAH version 5.6: a statistical 9 way tie from ranks 3 to 11
RSS: a statistical 12 way tie from ranks 3 to 14
HadCRUT4: a statistical 11 way tie from ranks 1 to 11
HadSST3: a statistical 6 way tie from ranks 1 to 6
GISS: a statistical 10 way tie from ranks 1 to 10
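To make the tie-counting concrete, here is a minimal Python sketch of the procedure used in parts two and three. The anomaly values in `demo` are made up for illustration; they are not taken from any of the six data sets.

```python
def statistical_ties(anomalies, target_year, tol):
    """anomalies: {year: annual anomaly}. Return the span of ranks that fall
    within tol of the target year's anomaly, plus the tied years."""
    target = anomalies[target_year]
    ranked = sorted(anomalies, key=anomalies.get, reverse=True)  # warmest first
    tied = [yr for yr in ranked if abs(anomalies[yr] - target) <= tol]
    ranks = [ranked.index(yr) + 1 for yr in tied]
    return (min(ranks), max(ranks)), tied

# Made-up demo values, not from any real data set
demo = {2014: 0.50, 2010: 0.52, 2005: 0.49, 1998: 0.48, 2002: 0.38}
print(statistical_ties(demo, 2014, 0.03))   # a 4-way tie from ranks 1 to 4
```

With a tolerance of 0.03, four of the five demo years tie with 2014; tightening the tolerance leaves 2014 alone in 2nd, which is exactly how the rankings above shrink and grow between the three parts.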
For those who may be interested, this is how HadCRUT3 would have done if it were still around. Assuming that HadCRUT3 would have gone up as much from 2013 to 2014 as HadCRUT4 did, then HadCRUT3 would have had a 2014 anomaly of 0.529. This would have placed it in 2nd place. Prior to this year, 1998 was at 0.548 and 2005 was at 0.482.
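The arithmetic behind this HadCRUT3 estimate can be laid out explicitly. The HadCRUT4 annual values are the ones from the table in Section 3 (2013: 0.492, 2014: 0.564); the HadCRUT3 2013 value of 0.457 is back-solved from the stated 0.529 result rather than taken from the source data, so treat it as illustrative.

```python
# HadCRUT4 annual values from the table in Section 3
had4_2013, had4_2014 = 0.492, 0.564
# HadCRUT3's 2013 anomaly, back-solved from the 0.529 figure quoted above
had3_2013 = 0.457
# Assume HadCRUT3 rose by the same year-on-year amount as HadCRUT4
had3_2014_est = round(had3_2013 + (had4_2014 - had4_2013), 3)
print(had3_2014_est)  # 0.529
```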
In the sections below, as in previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on some data sets. At the moment, only the satellite data have flat periods of longer than a year. The second section will show for how long there has been no statistically significant warming on several data sets. The third section will show how 2014 compares with 2013 and the warmest years and months on record so far. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.
Section 1
This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative on at least one calculation. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
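As a rough sketch of this search (not WFT's actual code), one can scan candidate start months from oldest to newest and report the earliest one whose least-squares trend to the present is zero or negative:

```python
import numpy as np

def longest_flat_start(anomalies):
    """anomalies: monthly values, oldest first. Return the index of the
    earliest start month from which the least-squares trend to the end
    of the series is zero or negative, or None if there is none."""
    y = np.asarray(anomalies, dtype=float)
    n = len(y)
    for start in range(n - 2):               # need at least 3 points for a trend
        x = np.arange(n - start)
        slope = np.polyfit(x, y[start:], 1)[0]
        if slope <= 0:
            return start                     # oldest-first scan: first hit is earliest
    return None

# Toy series: an early rise followed by a long gentle decline
series = [0.0, 0.1, 0.5, 0.45, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15]
print(longest_flat_start(series))  # 1: trend is flat-or-negative from the second month on
```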
1. For GISS, the slope is not flat for any period that is worth mentioning.
2. For HadCRUT4, the slope is not flat for any period that is worth mentioning. Note that WFT has not updated HadCRUT4 since July, and only HadCRUT4.2 is shown.
3. For HadSST3, the slope is not flat for any period that is worth mentioning.
4. For UAH, the slope is flat since January 2005 or an even 10 years. (goes to December using version 5.5 and based on Walter Dnes’ calculation.)
5. For RSS, the slope is flat since October 1996 or 18 years, 3 months (goes to December).
The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line at the top indicates that CO2 has steadily increased over this period.
When two things are plotted as I have done, the left axis shows only the temperature anomaly.
The actual numbers are meaningless since the two slopes are essentially zero. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 18 years, the temperatures have been flat for varying periods on the two sets.
Section 2
For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.
On several different data sets, there has been no statistically significant warming for between 14 and 22 years according to Nick’s criteria. CI stands for the confidence limits at the 95% level.
Dr. Ross McKitrick has also commented on these results and has slightly different numbers for the three data sets that he analyzed. I will give his times as well.
The details for several sets are below.
For UAH: Since July 1996: CI from -0.041 to 2.218
(Dr. McKitrick says the warming is not significant for 16 years on UAH.)
For RSS: Since December 1992: CI from -0.013 to 1.752
(Dr. McKitrick says the warming is not significant for 26 years on RSS.)
For HadCRUT4.3: Since May 1997: CI from -0.011 to 1.132
(Dr. McKitrick said the warming was not significant for 19 years on HadCRUT4.2 going to April. HadCRUT4.3 would be slightly shorter however I do not know what difference it would make to the nearest year.)
For Hadsst3: Since May 1995: CI from -0.009 to 1.715
For GISS: Since June 2000: CI from -0.008 to 1.403
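For readers who want to reproduce the flavor of these numbers, here is a simplified sketch on synthetic data. Nick's Trendviewer corrects for autocorrelation in the residuals, which this plain-OLS version does not, so its intervals will be too narrow on real monthly data; the logic of the test is the same, though: warming is "not statistically significant" from a given start month if the lower 95% confidence bound on the trend is below zero. (His CI figures above appear to be in degrees per century.)

```python
import numpy as np

def trend_with_ci(y, z=1.96):
    """Least-squares trend per time step with an approximate 95% CI
    (plain OLS, no autocorrelation correction)."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid ** 2) / (len(y) - 2)           # residual variance
    se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))   # standard error of the slope
    return slope, slope - z * se, slope + z * se

# 20 years of synthetic monthly anomalies: a tiny trend buried in noise
rng = np.random.default_rng(42)
y = 0.0001 * np.arange(240) + rng.normal(0.0, 0.1, 240)
slope, lo, hi = trend_with_ci(y)
print(f"trend {slope:+.5f}/month, 95% CI [{lo:+.5f}, {hi:+.5f}]")
print("zero trend cannot be ruled out" if lo < 0 else "statistically significant warming")
```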
Note that all of the above periods, regardless of the source, with the exception of GISS, are longer than the 15 years which NOAA deemed necessary to “create a discrepancy with the expected present-day warming rate”.
Section 3
This section shows data about 2014 and other information in the form of a table. The five data sources run across the top of the table, and the source row is repeated further down so it remains visible at all times. The sources are UAH, RSS, HadCRUT4, HadSST3, and GISS.
Down the column, are the following:
1. 13ra: This is the final ranking for 2013 on each data set.
2. 13a: Here I give the average anomaly for 2013.
3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and three have 1998 as the warmest year. This is all prior to 2014.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. Note that this does not include records set during 2014, such as HadSST3 in June, etc.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0. Periods of under a year are not counted and are shown as “0”.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
9. sy/m: This is the years and months for row 8. Depending on when the update was last done, the months may be off by one month.
10. McK: These are Dr. Ross McKitrick’s number of years for three of the data sets.
11. Jan: This is the January 2014 anomaly for that particular data set.
12. Feb: This is the February 2014 anomaly for that particular data set, etc.
23. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.
24. rnk: This is the rank that each particular data set would have for 2014 without regard to error bars. Due to different base periods, the rank is more meaningful than the average anomaly.
Source | UAH | RSS | Had4 | Sst3 | GISS |
---|---|---|---|---|---|
1.13ra | 7th | 10th | 9th | 6th | 6th |
2.13a | 0.197 | 0.218 | 0.492 | 0.376 | 0.60 |
3.year | 1998 | 1998 | 2010 | 1998 | 2010 |
4.ano | 0.419 | 0.55 | 0.555 | 0.416 | 0.66 |
5.mon | Apr98 | Apr98 | Jan07 | Jul98 | Jan07 |
6.ano | 0.662 | 0.857 | 0.835 | 0.526 | 0.92 |
7.y/m | 10/0 | 18/3 | 0 | 0 | 0 |
8.sig | Jul96 | Dec92 | May97 | May95 | Jun00 |
9.sy/m | 18/6 | 22/1 | 17/7 | 19/8 | 14/7 |
10.McK | 16 | 26 | 19 | | |
Source | UAH | RSS | Had4 | Sst3 | GISS |
11.Jan | 0.236 | 0.260 | 0.508 | 0.342 | 0.68 |
12.Feb | 0.127 | 0.160 | 0.305 | 0.314 | 0.43 |
13.Mar | 0.137 | 0.213 | 0.548 | 0.347 | 0.70 |
14.Apr | 0.184 | 0.250 | 0.658 | 0.478 | 0.72 |
15.May | 0.275 | 0.286 | 0.596 | 0.477 | 0.79 |
16.Jun | 0.279 | 0.346 | 0.620 | 0.563 | 0.61 |
17.Jul | 0.221 | 0.351 | 0.544 | 0.551 | 0.50 |
18.Aug | 0.117 | 0.192 | 0.666 | 0.644 | 0.73 |
19.Sep | 0.186 | 0.205 | 0.592 | 0.574 | 0.81 |
20.Oct | 0.242 | 0.272 | 0.614 | 0.528 | 0.75 |
21.Nov | 0.213 | 0.245 | 0.486 | 0.480 | 0.66 |
22.Dec | 0.176 | 0.284 | 0.632 | 0.452 | 0.72 |
Source | UAH | RSS | Had4 | Sst3 | GISS |
23.ave | 0.199 | 0.255 | 0.564 | 0.479 | 0.68 |
24.rnk | 7th | 6th | 1st | 1st | 1st |
To see all points since January 2014 in the form of a graph, see the WFT graph below. Note that HadCRUT4 is the old version that has been discontinued. WFT does not show HadCRUT4.3 yet.
As you can see, all lines have been offset so they all start at the same place in January 2014. This makes it easy to compare January 2014 with the latest anomaly.
Appendix
In this part, we are summarizing data for each set separately.
RSS
The slope is flat since October 1996, or 18 years, 3 months. (goes to December)
For RSS: There is no statistically significant warming since December 1992: CI from -0.013 to 1.752.
The RSS average anomaly for 2014 is 0.255. This would rank it as 6th place. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it was ranked 10th prior to 2014.
UAH
The slope is flat since January 2005 or an even 10 years according to Walter Dnes. (goes to December using version 5.5)
For UAH: There is no statistically significant warming since July 1996: CI from -0.041 to 2.218. (This is using version 5.6 according to Nick’s program.)
The UAH average anomaly for 2014 is 0.199. This would rank it as 7th place. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it was ranked 7th prior to 2014.
HadCRUT4.3
The slope is not flat for any period that is worth mentioning.
For HadCRUT4: There is no statistically significant warming since May 1997: CI from -0.011 to 1.132.
The HadCRUT4 average anomaly for 2014 is 0.564. This would rank it as 1st place. 2010 was the previous warmest at 0.555. The highest ever monthly anomaly was in January of 2007 when it reached 0.835. The anomaly in 2013 was 0.492 and it was ranked 9th prior to 2014.
HadSST3
For HadSST3, the slope is not flat for any period that is worth mentioning. For HadSST3: There is no statistically significant warming since May 1995: CI from -0.009 to 1.715.
The HadSST3 average anomaly for 2014 is 0.479. This sets a new record. 1998 was the warmest at 0.416 prior to 2014. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. This is also prior to 2014. The anomaly in 2013 was 0.376 and it was ranked 6th prior to 2014.
GISS
The slope is not flat for any period that is worth mentioning.
For GISS: There is no statistically significant warming since June 2000: CI from -0.008 to 1.403.
The GISS average anomaly for 2014 is 0.68. This sets a new record. 2010 was the warmest previously at 0.66. The highest ever monthly anomaly was in January of 2007 when it reached 0.92. The anomaly in 2013 was 0.59 and it was ranked 6th prior to 2014.
Conclusion
Due to new records being set for GISS, HadCRUT4.3 and HadSST3, the pause for these data sets has basically disappeared. At least that is the case for now. Should lower temperatures come in 2015, then the pause may resume in each case. However, all satellite records still show a pause of several years. Due to a relatively constant adiabatic lapse rate, this is very puzzling. Do you have any suggestions as to why there is this discrepancy? Of the three ways that I have given the rankings for 2014, which do you think is the most accurate? If you want me to do a different calculation for any of the six data sets that I have covered such as “What is the 2014 ranking for RSS assuming a tie if the anomaly is within 0.06 of the 2014 number?” please let me know.
January could be an interesting one, I think. 2015 might show cooling across all regions, the UK already being lower than average, the USA being lower than average, etc.
I’m going to say Australia will bump up the global average……
Why? Aus is a minuscule landspace compared to other land spaces. Or maybe because of “adjusted” data by the BoM? If so, I’d say you are correct. The BoM, the CSIRO and the MSM are still spinning 2014 as the hottest evah, even though the MetO, WMO, NASA and NOAA say different (after retraction)! And RSS (alarmists don’t like RSS as it contradicts their view), and now satellite data too, “says” different!
On what criteria can Australia be fairly described as a “minuscule” landspace?
It is about 6% of the total land area of the globe, and whilst it is obviously not a continent that should dominate temperature series, it is sufficiently large to have more than a “minuscule” effect on global land based temperatures.
The UK on the other hand is rather tiny, so it should not contribute much to global temperature series. Prior to last winter (i.e., Dec 2013 to Mar 2014), winter temps according to CET had cooled, as from 2000, by about 1.5 deg C. That is noticeable cooling, and the UK has been unprepared for such conditions; it has caused much travel disruption, since local councils had been advised that they would not need their gritting equipment nor need to stockpile salt etc. because of the Met Office’s predictions that global warming would lead to mild winters.
Last winter was mild, so I guess that the cooling since the start of the millennium is not quite so large, but this winter may be considered a cool winter, so perhaps when this winter is finished, CET will continue to show, as from 2000, a fall in winter temps of about 1.5 deg C.
I do not know how the rest of Northern Europe is faring on the winter stakes, but it appears that both the USA and the UK have been experiencing colder winters these past 15 or so years, and this is why people are mocking the Dr Viner prediction that children just will not know what snow is.
richard verney
I calculate Australia land mass as 5.2% of earth (2,970,000/57,268,900 sq miles; 7,692,000/148,326,000 sq km).
5.2% vs 6% is “about” a 15.4% error (0.8/5.2), but why quibble about accuracy in a science discussion.
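For what it is worth, both figures in this sub-thread can be checked in a couple of lines, using the areas quoted above:

```python
# Areas quoted earlier in this sub-thread, in square km
australia_km2 = 7_692_000
world_land_km2 = 148_326_000

share = australia_km2 / world_land_km2 * 100
print(round(share, 1))                      # 5.2 percent of global land area
print(round((6 - 5.2) / 5.2 * 100, 1))      # 15.4 percent relative "error" vs the 6% claim
```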
Chip. 6% is plenty close enough when you can’t measure accurately the other 95%. Don’t be an a$$
Compared to the land masses of Africa, the Americas and Eurasia, Australia *IS* minuscule!
Only if they fiddle with the data. Extra hot at the end of last year, but pretty mild down under for 2015 so far
UAH version 5.6 just came out for January. There was a small jump to 0.351 from 0.322 in December.
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
A rise of 0.029 C from one month to the next does not mean this will happen for the next 100 years. Check out the table to see how much anomalies can vary from one month to the next.
@Epiphron Elpis,
The jump = 0.029, or 0.29/decade, by your logic, assuming January 2015 is representative.
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
According to Dr. Spencer:
“The Version 5.6 global average lower tropospheric temperature (LT) anomaly for January, 2015 is +0.35 deg. C, little changed from the December 2014 value of +0.32 deg. C”
If you feel you know better, then feel free to take it up with him.
Shelf sitters are berated, because the Monckton of Brenchley algorithm finds its way back to before that anomalous anomaly starting in 1998 with the great El Niño.
He cherry picked that start date they say, to hold up that end of the rope.
So I have noticed, maybe even noted, for some time that the 1998 El Niño was immediately followed by the great crash.
Now from your first figure, I can see that there is a lot of up and down going on between two points on your green zero trend line, those being at 1998 and 2005.
So that begs the question: just what is the average of that data between those two points on your zero line at 1998 and 2005?
My eyeball says it might also be near zero, but since you have the data, is that something you can extract ??
G
See the following. The trend is flat from October 1996 to the present. However it is also flat from October 1996 to April 2000 as well as more or less flat from April 2000 to the present.
http://www.woodfortrees.org/plot/rss/from:1996.75/plot/rss/from:1996.75/to:2000.3/trend/plot/rss/from:2000.3/trend/plot/rss/from:1996.75/trend
The bottom line is that the valleys around 1998 cancel out the peak. And that is why people cannot get away with claiming we are cherry picking if we say a slope of zero starts before 1998.
If you take HadCrut4’s series of monthly global surface temperature anomalies and their associated estimates of error, there’s been no detectable global warming since 1997.
http://4.bp.blogspot.com/-dROUUOsiXJg/VNIgTqnp4XI/AAAAAAAAF08/7o08QJ7isBw/s1600/h4.png
Thank you! It appears as if you are assuming an error bar of 0.16. In that case, we could also say there is a 14-way tie for first place with HadCRUT4.3.
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
According to Nick Stokes, the warming on HadCRUT4 is not statistically significant from June 1997.
Adelaide in South Australia had its coolest January in 10 years
It has been a cool January after the first week for most of Australia. It looks like it will still be above average, officially.
Only after adjustments!
To date this has been the coolest summer I can ever remember (in Adelaide). I have never had to put on a jumper (sweater) to go to work before, so any claim of a “hot” summer will only occur in the mind of the data manipulator. There have certainly been no “extra hot” days here so far. We have had hot spells in Feb in the past, so we will need to wait and see, but so far, cool indeed.
James, it can’t possibly be the cold; after all, this is the warmest decade ever, so your need for a sweater this year must just be you getting old! /sarc off At least in my case that could be true. I also know for a fact the last ten years have been the warmest I have ever experienced, yet for my siblings that does not seem to be true; I think they would blame it on the fact that I moved to Arizona while they remain in Minnesota and North Dakota. Yes, putting on a jacket is a pain; we have to do that here in winter, and used to do that in the summer up north. Up north you don’t take said sweater off very often this time of year, and often you have to cover it with a parka! I certainly don’t miss that!
Australia Jan 2015 temperature is down. I found it easily on the bom.gov.au website a couple of days ago, and can’t find it now. It was offering calendar month in a drop down list and now appears to offer only year or season. Maybe I’m just looking in a different place. From memory the Jan max temp was down a bit and the Jan min was marginally the lowest Jan min in 15 years.
Anthony, in reference to the statement “One reason for the perceived warming, Hager says, is traced back to a change in measurement instrumentation. He says glass thermometers were (was) [sic] replaced by much more sensitive electronic instruments in 1995. Hager tells the SZ:
‘For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer.’
weren’t you running an experiment to make this comparison?
Here is more on what Klaus Hager had to say. He is a meteorologist of 44 years and lecturer at the University of Augsburg. He is considered an weather instrumentation and measurement. Warmist please avert your eyes.
CORRECTION……I meant:
“He is considered a weather instrumentation and measurement expert.”
don’t do that to me first thing in the morning…when I’ve only had half a cup!
Jimbo, your posts are always so informative.
If you added a correction factor of 0.9 C to the glass thermometer data prior to 1995 wouldn’t that literally erase global warming?
The glass has a slow response compared with the electronic, but let’s assume that it’s in between.
So 0.45 K can be subtracted, leaving 0.35 K since 1850 as global warming, and half of that can be attributed to greenhouse gases; nothing to be worried about.
In Germany.
No because you see it makes perfect sense to keep the current because it is accurate and lower the past by even more to give the electronics a good start. /sarc
Glass has a slow response? Compared to what? I’ve used glass thermometers to check the temperature of several areas I use as cold storage for dormant trees to be transplanted, and it only takes a minute or two for said thermometers to stabilize (meaning they don’t change after that), depending on the prior temperature they were at.
Is temperature so variable or wide-ranged it has to be measured every second? My experience indicates glass thermometers have a sufficiently quick response as to be a non-issue. It takes roughly 12 hours for a climate thermometer to swing between a low and a high temperature. That should be enough time to get the job done accurately.
(The only way there would be a true 0.9 degree difference in the two instrument types is if temperature were a lognormal function, and it isn’t. It sounds more like a systematic bias, and the electronic instruments weren’t calibrated properly.)
RockyRoad, February 3, 2015 at 1:31 pm: “Glass has a slow response? Compared to what? …”
Well they have to try and get two readings a day to be Nyquist kosher, if you happen to believe that the diurnal temperature excursion is a pure sinusoid with a 24 hour period (which it isn’t).
So they gotta have thermometers at least ten times faster than that.
A more likely scenario is that they simply are/were using the wrong kind of mercury-in-glass thermometers. And incidentally, it is the mercury which has to respond; it is to be hoped that the glass doesn’t respond to any great extent, as that would make the blessed things highly non-linear.
But as to the wrongness of said thermometers: mercury-in-glass thermometers which I have bought to make accurate temperature measurements of photographic processing chemicals (for developing color film) are ones intended for chemistry type experiments, and they typically are calibrated for an immersion length (in the liquid being measured) of circa 76 mm, which is also about 3 inches. Maybe I have that immersion length wrong, but there is one length for which those thermometers are calibrated.
So if the whole thing is just suspended in air, then that type of thermometer is going to read wrong anyway.
But I think the thermometers aren’t the problem.
Thermometers read the Temperature of the thermometer (in some way).
There is no assurance that the region of the atmosphere whose Temperature you wish to know, is being read by that thermometer.
But in any case, I don’t think it matters much what the thermometer reads, it is only representative of the exact location of the thermometer, and not much else.
g
Good point! To clarify for others: you make a thermometer and put it into ice water; label that point 0 C. Then put it into boiling water; label that point 100 C. Of course this is at sea level, and there may be other considerations. So the expansion of the glass is automatically accounted for, but only at those two points. If you assume you can make 100 equal divisions, then different expansion rates could be a problem.
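A tiny sketch of that two-point calibration: the ice and boiling points are pinned down exactly, and everything in between is a linear interpolation that is only as good as the assumption of uniform expansion.

```python
def two_point_scale(raw, raw_ice, raw_boil):
    """Map a raw reading (e.g. mercury column length in mm) to deg C,
    assuming linearity between the 0 C and 100 C fixed points."""
    return (raw - raw_ice) / (raw_boil - raw_ice) * 100.0

# Suppose the column reads 20 mm in ice water and 220 mm in boiling water:
print(two_point_scale(20, 20, 220))    # 0.0   (fixed point, exact by construction)
print(two_point_scale(220, 20, 220))   # 100.0 (fixed point, exact by construction)
print(two_point_scale(120, 20, 220))   # 50.0  (only exact if expansion is truly linear)
```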
Hager’s experiment seems to disagree with NOAA/NCDC’s study and the adjustments they’ve been making.
ftp://205.167.25.101/pub/data/ushcn/papers/quayle-etal1991.pdf
“Do you have any suggestions as to why there is this discrepancy?”
Duh. The surface data has been corrupted.
That the surface record has been corrupted has been well documented by a number of WUWT articles.
The other reason is UHI. Surface records are mainly in places of increased human generated heat over time. Satellite records are all over the globe.
Yep. My mathematical talents are such that I decided not to join a Yakuza gang on the grounds that a major mistake would leave me only able to count to nine and a half, so looking at all those numbers just made me dizzy.
But when I saw “Do you have any suggestions as to why there is this discrepancy?”, my first thought was “Someone – and probably not the Yamaguchi-gumi – has fiddled the figures.”
rooter
You ask
No, nobody can, because none of the global temperature data sets indicates a known physical parameter (see Appendix B here).
However, that is merely your attempt to change the subject because you could not justify your assertion which I questioned by asking
Richard
Like the UAH record then. Every temperature index except RSS v3.3 has been corrupted.
rooter
You say
It seems unlikely that the RSS v3.3 data set would be unique in this manner. Please say how you know RSS v3.3 has not been corrupted.
Richard
Richard says:
“It seems unlikely that the RSS v3.3 data set would be unique in this manner. Please say how you know RSS v3.3 has not been corrupted.”
RSS is most likely the biased one, yes, because it is the outlier. It diverges after 1997. Land thermometers:
http://woodfortrees.org/graph/crutem4vgl/from:1997/offset:-0.26/plot/crutem4vgl/from:1997/offset:-0.26/trend/plot/rss/from:1997/offset:-0.10/plot/rss/from:1997/trend
sea temperature:
http://woodfortrees.org/graph/hadsst3gl/from:1997/offset:-0.26/plot/hadsst3gl/from:1997/offset:-0.26/trend/plot/rss/from:1997/offset:-0.10/plot/rss/from:1997/trend
UAH satellites and radiosondes:
http://i.imgur.com/urp8dpp.png
Can Richard tell what temperature index RSS actually matches?
rooter
My recent post in this sub-thread has appeared out of turn. Sorry. It is http://wattsupwiththat.com/2015/02/03/only-satellites-show-pause-wuwt-now-includes-december-data/#comment-1853350.
Richard
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
Nor the lack of one, obviously.
“Do you have any suggestions as to why there is this discrepancy?”
We all know why.
There may only be one solution to the problems raised by satellite data.
The satellite data must be homogenised immediately.
The temperatures for the first 20 years of satellite monitoring do not fit the models. Therefore, they must be adjusted downwards.
Surely the satellite data should be calibrated to the Earth stations /sarc.
It still dumbfounds me how ANYONE can believe all those adjustments to all the Earth station measurements around the world all produce adjustments in the SAME direction, a cooler past. This alone is enough to cry fraud.
As soon as the “right” people get control of the satellite data, all will be well.
If there is no statistically significant warming for all those years, how can it be said that “the pause” is over?
This has to do with how you define the pause and what arguments you care to get into. If there is a positive trend that is statistically insignificant, it can be argued either way. So Werner picked a stricter definition of the pause (nominally zero or negative trend) to completely avoid any accusations of exaggeration. By doing so, the conclusions are more sensitive to years with extreme endpoints, because statistical significance is not considered. Therefore Werner’s analysis is more likely to erase pauses when a near record has been reached, but it can also quickly bring the pause back after passing that near record if temperatures fall below the average afterwards. I share your sentiment that considering statistical significance would be the appropriate way of looking at it, but it opens you up to a lot of argument from the alarmist side when you call it a pause while there is a positive trend, even though that trend is statistically indistinguishable from zero.
Absolutely.
As soon as the next La Niña crops up, the pause will be back on with a vengeance. Say that happens in 2016: we may have no pause during 2015, and then suddenly, in 2016, a pause going back approximately 20 years. This is a problem of dealing with short time series.
What is lost in these discussions is that the satellite temp series show only a step change in and around the 1998 super El Niño; the temperatures are essentially flat from 1979 through to 1996/7 and are again flat post-1998 to date. This is not what one would expect if CO2 drives temperatures.
Thank you! And statistical significances were covered in section 2. They show no statistically significant warming for over 18 years on the average. And of course only the satellites show a pause.
Well Hmmm, it is clear that you simply have not read any account of exactly HOW the Monckton Shelf Graph is produced.
The MSG is produced by calculating the straight line trend between the two end points, using the very standard statistical mathematics discipline for doing that. That is done even if the two end points are last month and the month before, or if they are 18 years and three months apart. The entire data set between the segment end points is used to calculate the trend line.
Then the exact same set of numbers, and the exact same statistical mathematics regimen, is used to compute the standard deviation for that trend line.
So whether the trend line slope is zero or not, is ALWAYS in comparison with the statistical significance as represented by the standard deviation.
So there is NO BASIS for saying that statistical significance is not considered.
And the algorithm automatically pegs the end point of the data set at the most recently released data; in this case of the RSS data set.
The reason that the most recently released RSS data is used as the end point is that it literally IS the end point. There is NO DATA beyond the most recent (last month’s) release.
The beginning point is NOT selected by Lord Monckton.
The algorithm itself finds the start point by determining that any data set one month or more longer has a NON-ZERO trend that is statistically significant.
So bearing in mind how the MSG game is played; you might want to rephrase your post.
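For readers who want to check this mechanically: the trend-plus-uncertainty calculation George describes is ordinary least squares. Here is a minimal sketch (my own illustration, not Lord Monckton’s actual code; the synthetic series and seed are made up):

```python
import numpy as np

def trend_with_stderr(y):
    """OLS slope per time step and the standard error of that slope."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Standard error of the slope, from the residual variance
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return slope, se

# 20 years of synthetic monthly anomalies with no underlying trend
rng = np.random.default_rng(0)
flat = 0.2 + 0.05 * rng.standard_normal(240)
slope, se = trend_with_stderr(flat)
# A slope within about two standard errors of zero is what the comment
# calls statistically indistinguishable from zero.
```

Whether |slope| exceeds roughly 2 × se is the usual shorthand for significance; a full treatment would also correct for autocorrelation in monthly anomalies.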
Determining how long the pause is can be easily done with Nick Stokes’ site here:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
Put the blue marble on December 2014. Put the red marble on January 1996. Note the slope, which will be positive at this point. Then use the small red > that allows you to advance a single month at a time. Click away and stop when the slope is negative.
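That click-forward procedure can be automated. A hypothetical sketch (the function name and the synthetic series are mine; a real check would use the actual RSS monthly anomalies):

```python
import numpy as np

def pause_start(anoms):
    """Earliest start index from which the OLS slope of anoms[start:] is
    zero or negative, or None if no tail of at least 24 months qualifies.
    Mirrors the manual procedure: advance the start month until the
    trend first goes non-positive."""
    anoms = np.asarray(anoms, dtype=float)
    n = len(anoms)
    for start in range(n - 24):
        x = np.arange(n - start, dtype=float)
        slope = np.polyfit(x, anoms[start:], 1)[0]
        if slope <= 0:
            return start
    return None

# Synthetic example: ten years of warming, then ten years of slight cooling.
series = np.concatenate([
    np.linspace(0.0, 0.4, 120),        # warming segment
    0.4 - 0.0005 * np.arange(120),     # slightly cooling "pause"
])
start = pause_start(series)            # some month at or before index 120
```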
George says, “The beginning point is NOT selected by Lord Monckton. The algorithm itself finds the start point by determining that any data set, that is one month or more longer, has a NON-ZERO trend that is statistically significant.”
========================================
Indeed, claims of cherry picking are not valid at all.
Of course what really matters is that the earth is not warming at anywhere close to the rate predicted by the IPCC climate models, and even more than that, the predicted disasters are failing to materialize, yet the known benefits are in fact occurring.
OK, I’m going to ask something which I’ve been interested in for a while. It seems to me that there should be a relatively simple way to resolve the satellite is better vs land/sea is better argument.
Satellites are accepted as accurately measuring a broader spectrum and more comprehensive area of the Earth than station/buoy/etc measurements used to measure surface temps. As I understand it, those promoting the land/sea record vs satellite argue that the satellite measures the temperatures at an altitude which is not comparable with our experience at ground level.
Now, ignoring for a moment the many arguments which can be raised in objection to this, my question is: how reliably can we extrapolate the satellite readings to land/sea temperatures?
Satellites do not measure temperature! Nothing does!
Strain and temperature, my two most hated measurements.
People keep saying this, yet the scientific community as a whole generally accepts that temperature is what is being measured. Please tell me you’re not just playing semantics?
xyzzy11, sort of semantics.
There is no direct measure of temperature (or material strain). You need to use some external device to convert a physical property into electron flow to be able to measure them. Both are very hard to verify; there aren’t many variable temperature references available. You just can’t dial up 65.25 F and verify your calibration.
Strain gauges are worse, they rely on the bonding skills of the operator. And can’t be verified before or after a test. I tell the ME’s that strain measurements are faith based, use them accordingly.
xyzzy11, he’s just playing semantics.
Can anyone think of a measurement of any sort which somehow ports an indisputable result into our brains without any possible error? There ain’t no such thing.
Let’s say you want to know the length of a board. You get your tape measure and take a measurement. Do you now “know” the length? Not with infinite certainty and precision, you don’t.
– Maybe the tape measure is marked incorrectly. It is certainly the case that the tape measure is marked with only finite accuracy.
– Maybe it is a cold day and there has been thermal contraction.
– Maybe you mis-read the number. It is certainly the case that the measurement was taken with only finite precision.
– Maybe you are schizophrenic and hallucinated a number.
– Maybe someone else took the measurement and either lied or made a reporting error.
– etc.
There is always and necessarily a chain of inferences that connects the data occurring out there in reality, with some abstraction of its meaning in our heads. That is not a complaint or a criticism of science. It simply means that Patrick is playing semantics.
“””””……
Paul
February 3, 2015 at 5:00 am
Strain and temperature, my two most hated measurements……”””””
If you think measuring “Strain” is a pain, just try your hand at measuring “stress” instead.
As for measuring Temperature: just what is it, of which, you want to measure the Temperature??
Paul:
Well you may not be able to, but I can. 🙂
It’s called a Peltier device. I use this system to calibrate my temperature sensors.
In any case, just because some measurements are more challenging than others doesn’t mean they aren’t worth measuring.
Satellites cover the most area. Earth stations have very poor coverage and do not cover the oceans at all well, which is 70% of the Earth. This lack of coverage is large enough for 97% of crimatologists to make stuff up.
Ever heard of sampling?
You could of course tell us what the necessary coverage is. One measurement for each m²? km²? And what kind of criteria do you use for making your claim?
“Ever heard of sampling?”
Yep, I sample the outside temp on my drive to work. Today I saw it go from -2 at my house in the country, to +3 as I got to work at the edge of the city. A distance of 8 miles, over 18 minutes, and no large elevation change.
Except when you test for correlation between satellite and surface you find a remarkable agreement.
An agreement that IMPROVES as you move below the TLT.
[snip – see reply previously, try not to blow a head gasket – Anthony]
“””””…..
rooter
February 3, 2015 at 5:25 am
Ever heard of sampling?…..”””””
Ever heard of the Nyquist Sampling Theorem, the golden rule of ALL sampled data systems?
Namely, you must sample your variable at no greater interval than one half the period of the HIGHEST frequency component in your “Band Limited Signal.”
For example, if you have a band limited signal with a maximum frequency of one full cycle in 24 hours, which means it is a 24 hour period PURE SINE WAVE function with NO higher frequency harmonic components, then Nyquist says you must space your samples no further apart than 12 hours.
But if your diurnal Temperature cycle happens to be pseudo saw-toothy, so it rises rapidly in the morning at sun-up and decays more slowly due to radiative cooling after sundown, then you have at least a second harmonic component with a frequency of one cycle in 12 hours. So now you have to place your Temperature samples no further apart than six hours, and most Temperature stations don’t do that.
So if you only sample the earth’s Temperature at 12 hour intervals, the Nyquist theorem says that your reconstructed function (the temperature) contains in-band aliased signals folded all the way back to zero frequency, and the zero frequency signal is in fact the AVERAGE of the temperature over the day.
So twice a day Temperature sampling cannot even give you an accurate average Temperature for the day at that location.
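George’s point about the second harmonic is easy to demonstrate numerically. In this sketch (an idealised diurnal cycle I made up, not real station data), two samples 12 hours apart cancel the 24 h fundamental but alias the 12 h harmonic straight into the daily mean, while four samples recover it:

```python
import numpy as np

# Stylised diurnal cycle: a 24 h fundamental plus a 12 h second harmonic,
# giving the asymmetric rise/fall described above. One-minute resolution.
hours = np.linspace(0, 24, 24 * 60, endpoint=False)
temp = (15
        + 5 * np.sin(2 * np.pi * (hours - 9) / 24)   # 24 h fundamental
        + 2 * np.sin(2 * np.pi * (hours - 3) / 12))  # 12 h harmonic

true_mean = temp.mean()  # 15 by construction: both harmonics average to zero

# Two samples per day, 12 h apart (06:00 and 18:00): the fundamental
# cancels, but the 12 h harmonic does not -- it aliases into the mean.
two_sample_mean = temp[[6 * 60, 18 * 60]].mean()
bias = two_sample_mean - true_mean   # about +2 degrees for this cycle

# Four samples per day (00:00, 06:00, 12:00, 18:00) satisfy Nyquist for
# the 12 h harmonic, and the bias vanishes.
four_sample_mean = temp[[0, 6 * 60, 12 * 60, 18 * 60]].mean()
```

The exact size and sign of the bias depend on the assumed phases; the point is only that under-sampling folds the harmonic into the daily average, exactly as the comment says.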
Well when it comes to spatial sampling, the climate scene is a joke; at least it is for ground based.
Hansen claims you can space your thermometers 1200 km apart and get “good data”.
Well I can believe you can get good data that way. But it isn’t ANY good at all for determining the average Temperature of the earth.
In the SF Bay area, we frequently get spatial Temperature cycles with periods as short as 5-10 km.
So land based spatial Temperature sampling is a total farce.
No wonder none of the data sets agree with each other or with any of the models.
“my question is: how reliably can we extrapolate the satellite readings to land/sea temperatures?”
An assumption is required: that the lapse rate is constant with respect to changes in temperature. Then a change in satellite measured temperature (within error) equals a change at the surface. But really lapse rate is not constant. It changes with moisture content. An increase in near-surface temperature means a lower lapse rate near the surface which will bring surface temperatures closer to the satellite temperature (cooler). This negative feedback is ignored by modelers as it tends to jeopardize their income.
Well there is also an assumption that the bottom end of the atmosphere; the start of your “lapse rate” is somehow identical to the actual ground surface / ocean surface Temperature. No real basis for believing that, given the multiplicity of mechanisms determining the near surface air temperature.
“It seems to me that there should be a relatively simple way to resolve the satellite is better vs land/sea is better argument.”
There is. Just ask NASA before there was money to be had riding the CAGW train.
This article was in April 1990, and this quote is from the first paragraph:
Think about it: What changed between 1990 and now?
If one reviews the satellite data, the most obvious conclusion is that there is no CO2 induced warming to be seen in the record. No CO2 signal can be seen above measurement errors.
Temperatures were flat between launch (1979) and through to say 1996/7, and are flat post 1998 to date.
The only warming is a one off step change in and around the 1998 super El Nino. There is no evidence to suggest that that event was caused by anthropogenic CO2 emissions. It would appear to be a natural event.
The satellite data (if it can be relied upon, and the entire purpose of launching these satellites was to provide more reliable data than the land based temperature observatories) disconfirms cAGW, and strongly supports the contention that all warming is naturally driven by solar/cloudiness/oceanic events.
A big problem with the satellites is of course their divergence. Much more than the surface indexes.
http://www.woodfortrees.org/graph/rss/mean:12/offset:-0.10/plot/uah/mean:12/plot/rss/to:1997/offset:-0.1/trend/plot/uah/to:1997/trend/plot/rss/from:1997/offset:-0.1/trend/plot/uah/from:1997/trend
While the satellites are different, both still show pauses of at least 6 years. Others have pauses of less than a year.
Yes, satellites have issues, but that does not automatically make surface temperatures good. Actually, the significant issues satellite measurements have only underscore how bad the surface temps are, since even with all of their issues satellites retain significant advantages over surface readings: among them coverage, and a greatly reduced opportunity for human error.
Obviously you have heard of sampling, but you are happy as a gopher to ignore the inherent margins of error. At any rate it is irrelevant, since sampling does not create or change data, only statistical output from empirically collected data. Proper sampling would never have ignored 70% of the target area in determining its sample points. In surface measurements, adjustments such as in-filling are used to CREATE DATA. Sorry for the caps; it is simply an assist to those with thick skulls.
Inconsistent data across all the various data-sets, and the way historical data is constantly changing, indicate that global temperature is a mythical entity, or maybe better put, a political entity. Whatever it is, it is certainly nothing we can place confidence in being correct to within hundredths of a degree.
Alx says:
“In surface measurements adjustments such as in-filling are used to CREATE DATA. Sorry for the caps, it is simply an assist to those with thick skulls.”
Of course data is created when sampling. When calculating an average of the measurements, data for the missing areas is created: the average of the measurements represents the missing areas as well. So you must choose which method of creating that data is best: create the data for the areas with no measurements from the hemispheric mean, or create it from the nearest areas with measurements.
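rooter’s point here, that every gap-handling choice “creates data” one way or another, can be made concrete with a toy example (all numbers invented for illustration):

```python
import numpy as np

# Toy one-dimensional "globe": five regions, two of them unmeasured.
truth = np.array([0.0, 0.1, 0.5, 0.6, 0.2])   # hypothetical true anomalies
obs = truth.copy()
obs[[2, 3]] = np.nan                          # no stations in regions 2 and 3

measured = obs[~np.isnan(obs)]

# Method 1: average only the measured regions. Mathematically this is the
# same as filling every gap with the mean of the measured regions.
mean_fill_estimate = measured.mean()          # 0.1

# Method 2: fill each gap from its nearest measured neighbour, then average.
nn_fill = obs.copy()
nn_fill[2] = obs[1]
nn_fill[3] = obs[4]
nn_estimate = nn_fill.mean()                  # 0.12

true_mean = truth.mean()                      # 0.28 -- both methods miss it
```

Neither method recovers the true mean when the unmeasured regions behave differently from the measured ones; the choice is between implicit and explicit infilling, not between infilling and no infilling.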
The adiabatic lapse rates for wet and dry air should balance out after a while. And unusual circumstances such as temperature inversions should also balance out in the long run. So while satellites use a different method to determine temperature than the surface thermometers, the results should be similar with regards to relative changes over 10 or 18 years. That is, with no fudging!
Actually the folks who do RSS say the land record is more accurate
I understand they have their biases too. Do the UAH people agree?
Brozek:
Of course there must be a difference between lower troposphere and surface when starting with an El Niño and covering only 6 years. Happens every time you try that.
Mosh … They didn’t until recently. I wonder why?
Good point. However January 2009 is in a La Nina region.
Mosher, such an assertion deserves a link.
Perhaps they are pushing for newer, better satellites? There is nothing done today by the European and U.S. governments that doesn’t have a political and monetary agenda attached.
Eighty-nine percent of stations surveyed by Anthony Watts do not meet the criterion of less than one degree C accuracy. The worst offenders are located in cities, where UHI and poor siting near heat sources pose problems. These are the very ones being used to homogenize data for the U.S. How can you possibly say things like the earth has warmed 0.7 degree C since the beginning of record keeping? The band of uncertainty has to be large enough to drive a truck through, even in this very month’s output.
If the land record is better, then the RSS data is made up. That idea is just not credible. Fraud in the land data base is practically a given.
That depends hugely on the time frame. For the last 5 years for example, they could not be more different. See:
http://www.woodfortrees.org/plot/gistemp/last:60/plot/gistemp/last:60/trend/plot/rss/last:60/plot/rss/last:60/trend
You can’t. The folks who use satellites to measure Temperature by proxy take great pains to point out that what they are inferring from their proxy determination is the Temperature of Oxygen molecules at various levels of the troposphere.
So they aren’t even measuring surface Temperatures, is my understanding.
Dr. Roy Spencer has explained the methodology in great detail. Those in the know, say the proxy is quite trustworthy. I’m not in the know, but I take their word for it.
But if the problem in your mind is translating the Temperature reference IN THE SATELLITE to some ground thermometer that YOU trust, that really is a non-issue.
The NIST is well versed in the metrology of Temperature, and anybody who needs to can determine Temperature (of their thermometer) very precisely.
Surface data needs humans to collect, not satellite data.
satellite data is massaged, adjusted, and stitched together by humans.
humans who refuse to release their code.
REPLY: Careful there buddy, NOAA/NCDC has the same SAME problem. Their code to produce the USHCN and GHCN datasets (which BEST relies on) has not been released to my knowledge. They list procedures in papers, but I’ve never seen their actual code. USHCN and GHCN is also “massaged, adjusted, and stitched together by humans.”…your point is therefore …ahem, denied. – Anthony
wrong. The code is out there.
plus we use daily data
no adjustments
no adjustments
Au contraire. BEST throws every temperature record it has into a Cuisinart, mincing long, low-frequency informative temperature records into a homogenized smoothie mash of sub-decade segments devoid of any low-frequency nutritional value. In the Fourier Domain, BEST is adjusting the signal to noise ratio down to zero.
Mosher keeps saying the code is out there.
So, there is some code, run on an obscure program, that one needs a university-level computer server in order to operate properly. At the same time, we have no idea if this code is appropriate or is just completely biased, as it appears to be.
The last time I worked with a very large dataset, it took over 5 hours of run-time on my computer (overnight of course) to reach a result. 20 different attempts at verifying the result I was trying to replicate took over one month of personal time and I don’t think I want to take that risk of a dead computer once again.
So, one could attempt to run the code and a 400 MB datafile. Or one could simply note that the code seems to adjust the raw temperatures up ALL of the time, and that it does so by a LARGER and LARGER amount every time the dataset is updated every few weeks.
And then, even if one found a way to run the program so that its “biases” were removed and we ended up with the “true temperature signal”, Mosher and Nick Stokes and Zeke Hausfather would just berate the effort that cost you one month of personal time and the risk to your $1,000 working computer, because you did “something” wrong in your analysis. I have seen it several times now.
That is just a losing proposition.
And it is Mosher’s responsibility to justify what his numbers are and how his code works, and to provide a simple-to-use format of the data, so that we are not subject to destroying our $1,000 computers and losing one month of personal time just so that he can tell us what we did was “wrong” anyway.
Raw data, adjustments required in a simple format – not that hard.
Otherwise, I look at it as a “marketing effort” for a bad theory. It’s funny that climate science keeps complaining how bad they are at communication. That is completely wrong. They are the best “marketers” of a bad theory in the history of science. How else does a world waste 0.5% of GDP and risk larger unemployment just because of a theory that appears to be at least 50% wrong, and 100% wrong in terms of harmful impacts?
LOL, hell Mosh, much of the NCDC data is simply made up. And adjustments, why those are in abundance.
Link? Address? Phone number? You claim to know something the rest of us don’t, so let’s have it. No secrets among friends.
Of course there is a difference between surface measurements and the lower troposphere. The main difference is the response to ENSO: higher temperature in the troposphere during El Niños and lower during La Niñas, compared to the surface. Easy to see:
http://www.woodfortrees.org/graph/gistemp/from:1979/offset:-0.35/mean:12/plot/hadcrut3vgl/from:1979/offset:-0.26/mean:12/plot/rss/offset:-0.10/mean:12/plot/uah/mean:12
Fair enough. But also note that RSS ranked 6th for 2014 and UAH version 5.5 ranked 7th.
UAH 5.5 ranked 2014 7th. 5.6 is ranked 3rd.
Interesting. I did not know that. Well, then you have identified how the MSU/AMSU indexes change much more than the surface indexes.
Note that the difference between 3 and 7 is only 0.061 on version 5.5.
Indeed Werner, the satellites are not close to showing 2014 as the warmest.
The lapse rate is not so constant, because an increase of greenhouse gases warms the surface and cools the tropopause. The satellite datasets do not completely exclude middle and upper levels of the troposphere. And if greenhouse gases warm the surface where/when the local lapse rate is too low for convection past a thin layer above the surface, then the lower troposphere above that layer is not warmed.
Donald,
Please prove those assertions.
My conclusion is that GHGs allow upward leakage of radiation to space from within the atmosphere which reduces the amount of kinetic energy returning to the surface in adiabatically warmed descending air.
That would produce a reduction in surface temperature BUT at the same time the greater amount of kinetic energy carried by GHGs offsets that reduction for a net zero effect on average overall temperatures.
It is the power of convective overturning that adjusts the up and down heat flows so that radiation to space from within the atmosphere plus radiation to space from the surface always balance out in such a way as to leave energy radiated out to space equal to radiation coming in from space.
The satellites measure the net overall stability of the system as a whole (with small variations above and below the mean) but surface measurements are heavily skewed by short term local and regional variations that are mostly driven by ocean cycles responding to solar changes that affect global albedo via cloudiness variations.
Surface measurements are also skewed by a hopelessly inadequate system of adjusting for UHI plus many other incompetent adjustment procedures.
Donald, the only way the lapse rate can be ‘too low for convection past a thin layer’ is if the temperatures near the surface are the same as or lower than the temperatures higher up. Therefore, not warmer. This means, from your argument, that the satellites will over-estimate the temperature at the surface, as they assume a normal lapse rate.
The satellites don’t assume a lapse rate – they measure atmospheric temperature, and they do so at more than one level of the atmosphere. There are datasets that are nominally for the lower troposphere, the middle troposphere, and the lower stratosphere.
Have a look at, for RSS:
http://www.remss.com/measurements/upper-air-temperature#RSS%20Sounding%20Products
Especially Figures 1,5, and 6.
And they do measure absolute temperature. For example, the global lower troposphere temperature map at: http://images.remss.com/msu/msu_data_monthly.html
The Lapse Rate must be increasing for the surface to be increasing faster than the lower troposphere. That is the rate by which temperatures decrease as one moves higher in the atmosphere. The wet lapse rate is 6.5C per 1.0 km rise. In the satellite era, the lapse rate must have increased to 6.6C per km, given the difference between the surface and the troposphere.
This is opposite to what global warming theory/the IPCC/climate models predict. The Lapse Rate is supposed to decline to 6.3C per km in the long-run.
The climate models have the lapse rate feedback at -0.7 W/m2/C while it appears to be in the +0.25 W/m2/C range instead.
Just say that it is another one of the feedbacks that the theory is getting wrong (or the surface temperature records have been fiddled with by at least 0.35C in the satellite era). Water vapor and clouds are also wrong apparently. That only leaves the Albedo feedback as being correctly predicted in the theory.
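The arithmetic behind the 6.5 → 6.6 C/km figure above can be sketched as follows (the effective TLT altitude and the 0.35 C surface-minus-troposphere difference are illustrative assumptions, not measured constants):

```python
# All inputs are rough, assumed figures for illustration only.
base_lapse = 6.5       # C per km, the quoted wet lapse rate
h_eff = 3.5            # km, assumed effective altitude of the TLT layer
d_surface = 0.35       # C, assumed extra surface warming vs TLT in the era

# If the surface warmed d_surface more than the layer at h_eff, the mean
# lapse rate steepened by d_surface / h_eff.
d_lapse = d_surface / h_eff        # about 0.1 C per km
new_lapse = base_lapse + d_lapse   # about 6.6 C per km
```

With different assumed values for the TLT altitude the steepening comes out somewhat differently, but the direction (steepening rather than the predicted decline toward 6.3 C/km) is the comment’s point.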
“(or the surface temperature records have been fiddled with by at least 0.35C in the satellite era).”
/////////////////////////////
That is the most likely explanation.
Whilst trees are not thermometers, one should not overlook the fact that Michael Mann’s/Briffa’s trees were not showing warming in the satellite era! This suggests that one explanation behind the divergence problem was that the land based thermometer record had been corrupted high (whether by UHI, station drop-outs, homogenisation etc., or was otherwise simply unreliable).
richard verney says:
““(or the surface temperature records have been fiddled with by at least 0.35C in the satellite era).”
/////////////////////////////
That is the most likely explanation.”
verney must then assume that the indexes are correct. A bit hard to digest when we know how the satellite indexes diverge. And how the different versions change the result.
But if it really is so that verney is sure the TLT-indexes are correct, then his conclusion that the surface records have been fiddled with assumes that the models are right!
Models trumps observations for verney there.
Donald says… “The lapse rate is not so constant, because an increase of greenhouse gases warms the surface and cools the tropopause.”
————————————-
I thought the CAGW theory expected the opposite.
BTW the lapse rate is never constant in most any local area. I regularly drive up to my home from an area 2700 feet lower. The T drops and THEN warms at a higher elevation several times
Reblogged this on The Grey Enigma.
Why ?? it is here for everyone to read.
UHI boys and girls. I live in rural Ohio just 1 mile outside of a small town of 6,000 people. Flat farmland for miles around yet on really cold mornings in January and February it can be as much as 12 degrees warmer in town than just one mile outside of town. Scientists that say UHI is negligible or a 6,000 person town doesn’t have UHI need to get out of their office chairs and do some actual field work. Tell Gavin to come to where I live this week and measure temperatures at my house a mile outside of town and in town at 6:00 AM, as UHI is in full effect. Gavin will not, he has no time for field work as he’s too busy adjusting for TOBS as it matters more than UHI in his eyes.
How many rural thermometers are in these little towns of 6,000 people? Even they are being affected by UHI. Our little town didn’t exist in 1880; a railroad junction was added and it slowly grew, from about 1,000 people in 1900 to 6,000 today. It is time for Gavin to do some field work; as a gov’t employee he is being paid by us, the taxpayers. Earn your paycheck.
And if you move that thermometer out of the village you will get lower temperatures.
UHI was discovered more than 100 years ago. So you are a little late for that game.
rooter
The Romans were aware of UHI 2000 years ago. The nobility moved to the mountains during the summer to avoid it. When Rome burnt down, Nero was entreated to rebuild it with narrow alleys to create shade and so mitigate the effects of building.
tonyb
Huh? Gavin and his crew say UHI is negligible. A 12 degree difference at 6:00 AM between in town and 1 mile outside of town is negligible? This is a little 6,000 person town with farmland around it, not a 600,000 person city with suburbs surrounding it. UHI is huge even in rural places where Gavin and yourself say it doesn’t exist. Who considers a 6,000 person town URBAN? You would call it rural farmland.
In reality, most dry-land surface data is subject to human influences. “Rural” data is not free of human influence and very probably includes anthropic effects that are harder to identify and quantify than the UHI effect. “Rural” stations are primarily in agricultural areas. That means the data is affected by land clearing, vegetation cover changes, possibly irrigation effects, and who knows what else. Wild land stations are rarely broken out for separate analysis, and even many of those can be subject to anthropic effects; clear cutting near a station will have very definite local effects for example.
Indeed tonyb. UHI was not invented now.
40% of all stations exist in places where the human population is less than 1 person per sq km.
People don’t live in airports either, though it can feel like it.
Speaking of which, how is BEST treating this study which indicates the SNOTEL sites may have read 1 degree K too high after upgrades from 1995-2005! Were the step changes for these 700 sites caught and properly adjusted downward? Have you had a chance to form an opinion about their claims?
http://onlinelibrary.wiley.com/enhanced/doi/10.1002/2014GL062803/#Survey
Ignore the exclamation…. none intended. (sigh)
“Have you had a chance to form an opinion about their claims?
Yes.
But if the sensor is placed too close to say, a black top highway in that “1 person per sq km” area it would still read somewhat high, no?
And artificial heat sources are just one issue per station.
30 to 40 percent of stations are not used in monthly database graphics. The number of stations has also been dramatically reduced, meaning the area covered by the remaining stations is larger. When over 30 percent of the reduced station set is not used, the area for each individual station increases even further, up to 1200 km. Every time the raw data from all reporting stations is used, both the anomaly and the T are reduced.
David A says:
“30 to 40 percent of stations are not used in monthly database graphics. The number of stations has also been dramatically reduced, meaning the area covered by the remaining stations is larger. When over 30 percent of the reduced station set is not used, the area for each individual station increases even further, up to 1200 km. Every time the raw data from all reporting stations is used, both the anomaly and the T are reduced.”
Just wrong. The fact is the opposite:
http://i.imgur.com/bhlyDO9.png
Please detail what your graphic shows. The large number of dropped stations is well known, and not shown in your graphic. https://www.youtube.com/watch?feature=player_embedded&v=58mDaK9bH5o
Here are the changes to the global record since 2001:
https://stevengoddard.wordpress.com/2014/10/29/creating-a-record-global-temperature-gavin-style/
Here are some US changes from dropped stations…
https://stevengoddard.wordpress.com/2014/10/20/ncdc-turns-americas-17th-coldest-year-into-an-above-average-year/
What is done to the arctic by replacing water surface with land data is certainly warming…
http://oi46.tinypic.com/1zpheme.jpg
This happens in both hemispheres and is shown here…
Neither UAH nor RSS is close to the “warmest year ever”…
Here are some more adjustments to the global data sets: https://stevengoddard.wordpress.com/2015/02/04/gavins-spectacular-data-tampering-in-the-1990s/
Of course, before they really went to town on the 1990s, they had a 1940s problem…
From: Tom Wigley
To: Phil Jones
Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.
But even with all the hype, we are still below Hansen’s zero emissions scenario C
https://stevengoddard.wordpress.com/2014/10/15/giss-still-below-hansens-zero-emissions-scenario-c/
Oh, brain clicked in: the 12 degrees I hope is F, not C (well, it is 3:40 am PT). Less than 6 C between farmland and a small town center I can understand.
How about we compare old NCDC/GISS/HADCRUT/BOM values (i.e. before the latest round of adjustments) with the satellite data and the current values for those data sets?
I bet the old ones were closer to the Satellite data than they are now.
Which satellite data? Which version?
it’s all down to the oceans now!
****The study puts a widely reported “hiatus” in global surface air temperatures since 1998 into context…
Scripps Institution of Oceanography: Distinct Rise in Global Ocean Temperatures Detected
Comprehensive view of world oceans afforded by sensor network reveals the ongoing and steady rise of global climate system heat content
Embargoed By:
NATURE CLIMATE CHANGE FOR RELEASE: FEBRUARY 2, 2015
Researchers led by Dean Roemmich, a physical oceanographer at Scripps Institution of Oceanography, UC San Diego, found that the top 2,000 meters (6,500 feet) of the world’s oceans warmed at a rate of 0.4 to 0.6 watts per square meter (W/m²) between 2006 and 2013. The rate translates to a warming of roughly 0.005° C (0.009° F) per year in the top 500 meters of ocean and 0.002° C (0.0036° F) per year at depths between 500 and 2,000 meters.
For perspective, Roemmich noted that the heat gain was the equivalent of adding the heat of two trillion continuously burning 100-watt light bulbs to the world’s oceans…
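The light-bulb comparison is easy to sanity-check; in this rough sketch the ocean area is an assumed round figure:

```python
# Rough check of the "two trillion 100 W bulbs" equivalence.
ocean_area_m2 = 3.6e14        # approximate ocean surface area
flux_w_per_m2 = 0.55          # mid-range of the reported 0.4-0.6 W/m^2

total_watts = ocean_area_m2 * flux_w_per_m2
bulbs = total_watts / 100.0   # number of equivalent 100 W bulbs
```

This lands at roughly 2e12 bulbs, i.e. about two trillion, consistent with the figure quoted from Roemmich.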
“When we measure globally and deep enough, we see a steady rise in the earth’s heat content, consistent with the expected greenhouse gas-driven imbalance in our planet’s radiation budget,” said study co-author Susan Wijffels of Australian research agency the Commonwealth Scientific and Industrial Research Organization (CSIRO)…
***The study puts a widely reported “hiatus” in global surface air temperatures since 1998 into context…
https://scripps.ucsd.edu/news/distinct-rise-global-ocean-temperatures-detected
2 Feb: NYT Dot Earth: Andrew C. Revkin: A Fresh Look at the Watery Side of Earth’s Climate Shows ‘Unabated Planetary Warming’
A fresh analysis of thousands of temperature measurements from deep-diving Argo ocean probes shows (yet again) that Earth is experiencing “unabated planetary warming” when you factor in the vast amount of greenhouse-trapped heat that ends up in the sea….
The study, “Unabated planetary warming and its ocean structure since 2006,” was published today in Nature Climate Change. [I’ll add a direct link when there is one.]..
http://dotearth.blogs.nytimes.com/2015/02/02/a-fresh-look-at-the-watery-side-of-earths-climate-shows-unabated-planetary-warming/?_r=0
So the ocean’s eating the heat. As one would expect. Trillions of light bulbs–now there’s an analogy I can relate to! And those thousandths of a degree (is that really reliably measurable?) are lurking, waiting for a chance to jump back out of the ocean and fry us all! Unabated! All those light bulbs! We’re all gonna die!!
And just what sort of light bulbs are those? The ones that I use turn more than half of the energy I supply them with into EM radiation, NOT heat; so you would need at least 2 trillion light bulbs.
A trillion here, and a trillion there, and pretty soon you are talking about a lot of light bulbs.
What’s much more likely to jump out of the depths is cooling from that frigid 90% of the oceans’ volume that is 3C or below. Brrr.
For all intents and purposes, the oceans are an infinite heat sink so if they absorb the “excess” heat from the air, what is the problem?
Infinite is a big word, Werner. They can act as a very big damper on atmospheric average temperature, but in the end, the oceans warm too. The timescale should be lengthened though. Any numbers?
Let us assume for discussion’s sake that the deep ocean warmed 0.1 C in 60 years. Then it would take 6000 years to warm by 10 C, when the increase from 3 C may start to affect us. Regardless of your views on CAGW, we do not have enough fossil fuels for this amount of heating.
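The 6,000-year figure above is a simple linear extrapolation; as a sketch (the 0.1 °C per 60 years input is the assumption from the comment, not a measured value):

```python
# Linear extrapolation of deep-ocean warming (input rate is an assumption).
rate_per_year = 0.1 / 60.0     # 0.1 C over 60 years
target_rise = 10.0             # C of warming, from ~3 C, that might start to matter
years_needed = target_rise / rate_per_year
print(f"{years_needed:.0f} years")   # 6000 years
```

Extrapolating linearly over millennia is of course a large assumption in itself; the point of the arithmetic is only the order of magnitude.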
But – if you assume that somehow, the ocean waters ALL (every kg of them!) could be uniformly heated up by 1/10 of one degree even in 1000 years, that greater temperature cannot be transmitted (the energy transferred) to heat up any other body (such as the atmosphere or the land) by any temperature greater than 1/10 of one degree.
True enough. And even if the whole ocean warmed by 10 C in 6000 years, it would not warm the air, since the air is still warmer than 13 C. Of course I am talking about averages here, and the details can be very different in different parts of Earth.
So, the top 500 meters of the ocean are warming at the rate of 0.9 degrees F per century? Are we sure that we want to disembowel the world’s economies for that, especially since we obviously still do not know the cause of the warming?
Also, do you have any way of proving that your numbers have not been TOBSified?
Ken, I somehow hope that TOBSified classification disappears, way too close to my name.
Revkin’s piece misses two points, both fatal to his thesis. First, the increase in ocean heat measured by Argo is less than the models predict, another way to falsify them. Second, Trenberth’s speculation that an increase in deep ocean heat (below 2000 meters, where Argo does not measure) explains the pause suffers from two logical defects. First, why did the ocean uptake change coincide with the pause? (That implicitly admits natural variation, which is fatal to IPCC attribution.) Second, there is no mechanism by which such heat gets so deep without first being observed in the overlying layers. And it wasn’t.
Well yes there is.
Energy penetrates to great depths in the ocean as electromagnetic radiation, and it can go all the way to the bottom, or until it is absorbed by some non-fluorescing (non-radiative) process, which will convert it finally to “heat” (noun). And with all the deep sea critters that are also light bulbs, who knows how far the radiation can be carried down?
So yes, “heat” may not be propagating to the depths, but EM radiant energy certainly is.
G
Anyone who swallows statements about the oceans warming is gullible beyond redemption. Just what reliable measuring system do we have for accurately determining the average heat content of the entire oceans of our planet? Even the terminology should have you cringing: “watts per square meter to a depth of…” Heads up: if we’re measuring to depth, it’s no longer area, it’s volume, and that means it should be watts per cubic meter!!!
Which means the amount of energy is grossly underestimated or the warming is greatly exaggerated. Was the rate of warming equal at all depths to reach the lowest level, or did it vary with thermoclines and flows in the ocean?
Seems to me that some very basic questions and assertions were made improperly.
Even in the Argo era, there is much to be wary of in claims of actually knowing the energy content of the world’s oceans. But the thing that consigns the sweeping conclusions of Roemmich et al to the bin of pseudoscientific alarmism is the fact that “trends” observed over seven years are virtually meaningless, given the presence of much longer-period natural temperature variations.
When will they go to work adjusting the errors out of the satellite measurements? It’s getting out of hand, feeding pseudoscientific trolls like Watts just the ammunition they need to hide the truth!
You forgot the Sarc tag, unless of course you really are that stupid.
I don’t think cloud effects on the satellite measurements are fully understood. With water vapor in the atmosphere increasing, perhaps the satellites need to be re-calibrated every decade or so. Major adjustments were made in the 1990s, but nothing since.
A 0.06 anomaly with XX.X data. This is an empty statistical construct.
I was reminded recently of the experiment Anthony did with latex paints vs white wash that helped to kick off his project to check the status of the ground based network.
In the old days, the stations were painted with white wash, but over time this was changed to latex based paints. His experiments showed that white wash did a better job of reflecting infra-red frequencies.
The RSS data shows the relative temperatures better than the land/sea measurements, which have been homogenized, reanalyzed and readjusted by various obscure algorithms so as to be highly questionable.
The RSS data shows the peak in the millennial solar cycle in 2003 and a cooling trend since then.
http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
For forecasts of the timing and amplitude of the coming cooling based on the important 1000 and 60 year +/- periodicities in the temperature data and using the neutron count as the most useful proxy for solar “activity” see
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
Based on the sharp break in solar activity seen during 2005-6 in the Ap index in Fig 13 in the linked post we might anticipate a noticeable cooling below the 2003-14 cooling trend in 2017-18.
Added note the RSS graph also shows nicely the short term warming and relative amplitude of the 1998 and 2010 El Nino excursions
“The RSS data shows the relative temperatures better than the land/sea measurements which have been homogenized, reanalyzed and readjusted by various obscure algorithms so as to be highly questionable.”
RSS has also been adjusted. Wait until they fix the problem that Roy Spencer has pointed out.
pause is dead
From Dr. Spencer:
UAH is using version 5.5; however, a more accurate version 6 has been in the works for a while but is not yet completed. Hopefully it will narrow the gap when it is done.
Mosher here shows his extreme bias. Gavin Schmidt says 62% chance that 2014 was not the warmest year.
Mosher declares “the pause is dead” based on such data.
Egregious judgement Mosher. Very bad problem for your credibility.
You mean they are adjusting data?
Ask them if they will post their code.
Your wishing it so does not make it so.
No, no, it’s just sleeping. It’s not an ex-parrot, I mean ex-pause, just yet 🙂
Rich.
I think the problem is that if the sat trends don’t fall into line with ground trends (sat trends should actually be *greatest* in the mid troposphere), it’s going to make Mosher’s endorsement of the ground records look pretty stupid. He is invested like certain other groups, in a particular outcome.
mpainter- credibility?
“Pause is dead” Which, if true, still doesn’t prove what’s killing the “pause”.
Just like how they had to get rid of the MWP, they now have to get rid of the pause. Pure irony: the only time they make any major adjustments is when they are on an ideological crusade, first to prove UAH wrong, now to prove UAH right, only because it hurts the skeptic message.
Mosher always whines about posting code but he never received the proper education to be able to understand it. Giving an English major computer code to look at is like asking a barber to decipher hieroglyphics.
Someone FOIA RSS so we can see the trail of Mosher e-mails begging them to adjust their data.
Stating “no pause” for some of the data sets is misleadingly incomplete.
If there is a positive trend, what is it?
How does that trend compare to the doom and gloom predictions of ‘global warming’ for the last 20 years?
See Lord Monckton’s article here:
http://wattsupwiththat.com/2015/01/28/global-warming-is-still-on-the-great-shelf/
I keep reading about “statistical significance” and “confidence limits”. To calculate statistical significance or confidence limits requires specifying a statistical model. In the context of a global temperature, what exactly is the statistical model being used, what are the underlying assumptions, and what evidence is there to suggest that these underlying assumptions are realistic?
Read Ross McKitrick’s newish paper on that. It answers all your questions.
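For readers wondering what such a statistical model can look like in practice: one common approach (a simplified Santer-style AR(1) adjustment, not necessarily McKitrick’s exact method) fits an ordinary least-squares trend and then inflates the standard error to account for lag-1 autocorrelation in the residuals. A sketch on synthetic data:

```python
import numpy as np

def trend_with_ar1_ci(y):
    """OLS trend per time step, with a ~95% half-width widened for AR(1) residuals."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)             # effective sample size
    sigma2 = np.sum(resid ** 2) / (n_eff - 2.0)
    se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
    return slope, 1.96 * se

# Synthetic monthly anomalies: no underlying trend, just persistent AR(1) noise
rng = np.random.default_rng(0)
noise = np.zeros(216)
for i in range(1, 216):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)

slope, half_width = trend_with_ar1_ci(noise)
print(f"trend = {slope * 120:+.3f} +/- {half_width * 120:.3f} C/decade")
```

The point of the adjustment: with strong month-to-month persistence, the effective number of independent observations is much smaller than the number of months, so the confidence interval widens and short trends are rarely “significant” on their own.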
I have a number of devices to take temperature readings (just run-of-the-mill department store items) and am amused how they never report the same temperature. They tend to be within a degree (Fahrenheit) or two of each other.
It is almost comical how difficult it is to determine what the temperature was at any given place at any given point in time. You would think it would be easy, but it is not; it is enormously complex and error prone.
As Hager said, “No one has told us that”.
My question is why? How can something become so widespread without reasonable rational support? Are we leaving the age of enlightenment and rationality and reverting back to the dark ages?
Thanks, Werner.
I think that your pause criterion is too stringent.
Good point! As long as the slope is zero, no one can argue about it, at least in one sense. But once it is a low value, no matter how small, it can be debated. So if I am too stringent, and I may well be, exactly where do we draw the line? Do “we” decide the pause is there as long as the slope is no larger than 0.03 C/decade for example?
Thanks for your clarity. But I cannot trust slopes; they look too much like predictors.
For monthly temperatures, I prefer watching the value of a 13-month moving average.
And I do agree that your look-back method is proper and shows valid results.
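The 13-month centered moving average mentioned above is straightforward to compute; a minimal sketch on made-up monthly data:

```python
import numpy as np

def centered_ma(x, window=13):
    """Centered moving average; output is window-1 samples shorter than input."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Made-up monthly anomalies: an annual wiggle on a constant 0.2 C baseline
months = np.arange(240)
anoms = 0.2 + 0.1 * np.sin(2 * np.pi * months / 12.0)

smoothed = centered_ma(anoms)
print(len(anoms), len(smoothed))    # 240 228
print(f"{smoothed.std():.4f}")      # annual wiggle is nearly averaged out
```

An odd window such as 13 months keeps the average centered on a calendar month (6 months on each side), which is one reason it is often preferred over a 12-month window despite not cancelling the annual cycle exactly.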
The pause after the 1998 El Niño goes on. To me, this pause means that CO2 cannot be the control for the Earth’s temperature.
I have for a long time wondered why no one has taken note of the fact that we are in an interglacial period during which the earth continues to warm…until it doesn’t. It would be startling if it didn’t.
At least in Greenland, the GISP2 ice core says the Holocene peak was about 8 millennia ago, and the trend has been slight cooling since. The RWP was warmer than the MWP. The LIA was cooler than the Dark Age that followed the RWP and preceded the MWP.
Yes, it seems that we are already well into the down slope of this interglacial i.e. 10,700 years of GISP2, with CO2 from EPICA DomeC:
[Image: climate4you.com – Ole Humlum – Professor, University of Oslo Department of Geosciences – Click the pic to view at source]
Looked at from a longer perspective, this interglacial has seen a significantly slower decline than the last one, i.e. 140,000 Years, Vostok, Petit et al., 1999:
[Image: BP.Blogspot.com – Click the pic to view at source]
and it is this extended interglacial that has allowed our civilization to blossom, i.e. 400,000 Years – Vostok – Petit et al., 1999
[Image: CDIAC ORNL – Click the pic to view at source]
The real climate question humans should be grappling with is, when is the current interglacial going to end and what, if anything, can we do about it?
Yeah, the thing to look for is not arctic ice melt, but Canadian snow that doesn’t melt in the summer.
I have included this GISP2 Fig from Humlum as the key Fig in making climate forecasts — Fig 5 in my post at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
It shows clearly the millennial periodicity superimposed on the insolation driven Holocene peak and decline.
Fig 9 shows the general trends during the last 1000 year cycle, Fig 14 shows the solar driver peak in 1991, and Fig 13 the big drop in solar activity in 2005/6, which might be expected to bring significant cooling in 2017-18. There is about a 12 year delay between the solar driver peak and the RSS temperature peak in 2003. (See comment and link at 6:40 AM above.)
justthefactswuwt
There it is again. A proxy series that represents the whole world?
And people complain about infilling.
In this case the infilling is also done with a proxy record that ends in 1855. The 1855 value is infilled to “now”.
That really is something. Where is the outrage?
rooter
There it is again. A proxy series that represents the whole world?
So you suppose that interglacials are hemispheric or regional events?
And people complain about infilling.
Yes, because infilling is made up data, whereas this is actual data
In this case the infilling is also done with a proxy record that ends in 1855. The 1855 value is infilled to “now”.
No, there’s no infilling; all of the charts I posted end at year zero, which is 1855.
That really is something. Where is the outrage?
Outrage is left to the Warmists to spray in random directions, we’ll keep focused on a sober analysis of the facts.
justthefactswuwt says:
So you suppose that interglacials are hemispheric or regional events?
When answering how one proxy series represents the whole world. That is: infilling with one proxy series is ok because interglacials are not regional. One proxy is enough.
And goes on to say:
“Yes, because infilling is made up data, whereas this is actual data”
When responding to what is wrong with infilling. The making up of data is always done with actual data when infilling. And in this case jfw makes up data for the whole world from one set of actual data.
After that he says:
“No, there’s no infilling; all of the charts I posted end at year zero, which is 1855.”
Nowhere did he mention that the last year in that graph was 1855. And he does not know that the other charts have other end points than 1855.
Whatever happened to facts justthefactswuwt?
rooter February 4, 2015 at 1:31 am
When answering how one proxy series represents the whole world. That is: infilling with one proxy series is ok because interglacials are not regional. One proxy is enough.
You don’t make any sense, infilling is a known quantity, which this obviously is not. If you visit the WUWT Paleoclimate Page;
http://wattsupwiththat.com/reference-pages/global-weather-climate/paleoclimate/
and learn a bit, you’ll realize that there are several different proxies, and then I’ll leave it to you to decide whether interglacials are global, hemispheric or regional events….
And in this case jfw makes up data for the whole world from one set of actual data.
Again, you just don’t make sense, what is your point, you don’t think interglacials occur?
Nowhere did he mention that the last year in that graph was 1855. And he does not know that the other charts have other end points than 1855.
All of the graphs I showed have an end point of 1855, and I didn’t note it because 1855 was totally irrelevant to the point I was making, i.e. that we are on the down slope of this interglacial. Try reading the following thread and its predecessors and you might learn a bit:
http://wattsupwiththat.com/2013/04/13/crowdsourcing-the-wuwt-paleoclimate-reference-page-disputed-graphs-alley-2000/
Whatever happened to facts justthefactswuwt?
Still here, apparently having an argument with, makesnosense…
justthefactswuwt says:
“You don’t make any sense, infilling is a known quantity, which this obviously is not. If you visit the WUWT Paleoclimate Page;
http://wattsupwiththat.com/reference-pages/global-weather-climate/paleoclimate/
and learn a bit, you’ll realize that there are several different proxies, and then I’ll leave it to you to decide whether interglacials are global, hemispheric or regional events….”
Just to defend his infilling of the whole world from one proxy. Even infilled forward from 1855.
And justthefactswuwt did not follow his own advice and check different proxies. He used only one to infill the whole world.
justforthefactswuwt is trying hard to show something:
“All of the graphs I showed have an end point of 1855, and I didn’t note it because 1855 was totally irrelevant to the point I was making, i.e. that we are on the down slope of this interglacial. Try reading the following thread and its predecessors and you might learn a bit:
http://wattsupwiththat.com/2013/04/13/crowdsourcing-the-wuwt-paleoclimate-reference-page-disputed-graphs-alley-2000/
Whatever happened to facts justthefactswuwt?
Still here, apparently having an argument with, makesnosense…”
What he shows is that he has not checked the facts. All ended in 1855… Wrong.
But if that was true, what is the purpose of stopping in 1855? Hide the incline?
justthefactswuwt February 3, 2015 at 7:41 am
Yes, it seems that we are already well into the down slope of this interglacial i.e. 10,700 years of GISP2, with CO2 from EPICA DomeC:
How about adding the most recent 160 years of data?
rooter
Just to defend his infilling of the whole world from one proxy. Even infilled forward from 1855.
And justthefactswuwt did not follow his own advice and check different proxies. He used only one to infill the whole world.
At least use the right word: what you are referring to is “extrapolating”, not “infilling”. And yes, we have to extrapolate to assess paleoclimatic data, as there hasn’t been very good satellite coverage over the last several hundred thousand years…
rooter
When responding to what is wrong with infilling. The making up of data is always done with actual data when infilling. And in this case jfw makes up data for the whole world from one set of actual data.
Can you show the “data” that I was “making up”, e.g. here’s an example of the data that HadCRUT makes up:
http://wattsupwiththat.com/2014/11/05/hadcrut4-adjustments-discovering-missing-data-or-reinterpreting-existing-data-now-includes-september-data/
Nowhere did he mention that the last year in that graph was 1855.
Because it is irrelevant to the point I am making, which is that we are on the downslope of this interglacial. Do you disagree with this point? Do you think that we are currently at the warmest point in the current interglacial? Can you present any evidence to support any of your assertions? Are you familiar with the term hyperlink and have you ever attempted to use one?
What he shows is that he has not checked the facts. All ended in 1855… Wrong.
There’s a lot of conjecture around the end point of ice cores. This article claims that for Vostok “The present day (year zero) is deemed to be 1995, the year that the cores were drilled.”:
http://euanmearns.com/the-vostok-ice-core-temperature-co2-and-ch4/
Petit et al. makes no reference to present day:
http://www.daycreek.com/dc/images/1999.pdf
NOAA makes no reference to present day:
http://www.ncdc.noaa.gov/paleo/icecore/antarctica/vostok/vostok.html
CDIAC refers to the Collective Period of Record for Vostok and Dome C as 137 to 795,000 years Before Present (B.P.) (Before year 1950)
http://cdiac.ornl.gov/trends/co2/ice_core_co2.html
What do you think present day is for Vostok and what does it have to do with the argument at hand?
Phil.
How about adding the most recent 160 years of data?
That is an exercise in voodoo, as was discussed in depth on this thread;
http://wattsupwiththat.com/2013/04/13/crowdsourcing-the-wuwt-paleoclimate-reference-page-disputed-graphs-alley-2000/
hence graphs that do so have earned the ignominious title of “Falsified Graphs” and are located at the bottom of the WUWT Paleoclimate page:
http://wattsupwiththat.com/reference-pages/global-weather-climate/paleoclimate/
The reason they are considered falsified is that they mix incomparable data sets. Regardless, below are a couple of them; note that even with the voodoo and the highly questionable assertion that current temps are warmer today than during the medieval warming period, they still support my statement that we are on the down slope of the current interglacial:
[Image: Tinypic – Click the pic to view at source]
[Image: Tinypic – Click the pic to view at source]
Now that we’ve gotten that out of the way, you never responded to my comment here;
http://wattsupwiththat.com/2014/03/09/mysterious-new-man-made-gases-pose-threat-to-ozone-layer/#comment-1595459
where I completed my demonstration of the natural causes of the “Ozone Hole”. Would you like to admit that you were wrong on that front?
I’d like a T-shirt showing the first graph but with only the x-axis labelled. That way, people would have to ask what the y-axis represented.
The bottom line is that the pause continues and soon will be over, replaced by a downtrend in global temperatures.
It won’t be long now and it will be very entertaining.
This is going to be an unpopular comment, but here goes. I never understood the witchhunt for the pause. I thought it made skeptics look silly, so I’ve avoided it as an argument. Picking short periods to prove we aren’t warming, when the laws of physics say we should be warming, was borderline slayer in my mind. We can just look at the temperature record from the start of the industrial era instead and say: hey guys, we aren’t warming as fast as you said we would be, you are overestimating the feedbacks, and the path the temperature is currently on isn’t that alarming.
The significance of the pause is twofold. First, at 18-plus years it invalidates the CMIP5 models, and therefore the longer term CMIP5 projections for temperature and climate sensitivity. That unsettles the supposedly settled IPCC consensus. Second, it re-introduces natural variability, which means the IPCC’s “mostly CO2” is wrong. Which itself explains why CMIP5 failed the equivalent of out-of-sample validation. The models were parameterized per the CMIP5 experimental design to best fit the warming from about 1975 to about 2005. That only works if the observed warming was attributable to CO2. And the pause proves it wasn’t. (Some of it, probably. How much, nobody knows.)
I think the real point is that the pause counteracts the warming spell from 1979-98. As such, the underlying rate of warming is much less than originally believed, as you rightly say.
It reemphasises the fact that natural factors are still dominant and that climate sensitivity is much less than feared.
PDF document @NOAA.gov. For anyone else who wants it, the exact quote from pg 23 is:
”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
Santer’s 2011 paper later said 17 years rather than 15. It’s been 18.
However, Werner, that quote refers to simulations which do not include ENSO; after ENSO adjustment you don’t have an absence of warming over 15 years.
RSS is flat for 15 years starting in January 2000 as well. So you can even ignore 1998 and still falsify the models. See:
http://www.woodfortrees.org/plot/rss/from:2000/plot/rss/from:2000/trend
Furthermore, here we have a slope of zero and not zero at the 95% level.
Phil, actually your comment is narrowly incorrect, and also carries a hidden hook. You use the official WMO explanation for the pause, which conveniently overlooked that their own chart in WMO 2013 did not support the explanation. See essay Unsettling Science; the book went to press 10/14, so you might want to check WMO 2014, which is now out.
The hook is that there is no single ENSO definition, and NOAA changed theirs recently to add even more confusion. In general there seems to be a relative Nina/Nino natural variation that the models do not account for, but should have. Another hindcast parameterization problem. Bob Tisdale is a go-to font of knowledge on this.
Brozek:
Why not show UAH as well?
http://www.woodfortrees.org/graph/rss/from:2000/plot/rss/from:2000/trend/plot/uah/from:2000/plot/uah/from:2000/trend
This is version 5.5. Version 5.6 has a higher trend, and higher temperatures after 2012.
We may need to wait for version 6 to show the same as RSS. In the meantime, WFT only shows 5.5.
Rud Istvan February 3, 2015 at 1:34 pm
Phil, actually your comment is narrowly incorrect, and also carries a hidden hook.
No, it is accurate: the quote which Werner linked refers explicitly to simulations which exclude ENSO, as reading the page referred to makes clear. I have yet to see anyone on here who makes that quote actually mention the context in which it is made.
The Pause is not a witchhunt, it’s an observation that strikes at the core of the Catastrophic Anthropogenic Global Warming narrative. The start of the industrial era is irrelevant, anthropogenic CO2 emissions did not reach levels that could have a significant influence on Earth’s temperature until ~ 1950:
http://wattsupwiththat.com/2014/03/29/when-did-anthropogenic-global-warming-begin/
Furthermore, there was no global warming during the 1950s or 60s, and while Earth did warm from the mid-1970s to 1998, the warming periods of 1860-1880, 1910-1940 and 1975-1998 are statistically identical.
http://wattsupwiththat.com/2014/01/25/when-did-global-warming-begin/
Lastly, since, 1998, as the Economist pointed out, “The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750. And yet, as James Hansen, the head of NASA’s Goddard Institute for Space Studies, observes, ‘the five-year mean global temperature has been flat for a decade.’”
http://www.economist.com/news/science-and-technology/21574461-climate-may-be-heating-up-less-response-greenhouse-gas-emissions
As such, the Pause invalidates the entire premise of the CAGW narrative, i.e. large increases in anthropogenic CO2 emissions do not appear to result in significant increases in Earth’s temperature. Why would we not want to highlight that?
Even more fun are some of the easily falsifiable explanations coming out of the warmunist camp. Mann denying there is a pause in his April 2014 article in Scientific American is the funniest. Trenberth’s paper saying the heat suddenly decided to hide in the deep oceans, where ARGO cannot measure it, is second funniest. Unsettled, they are.
Well said.
http://www.cru.uea.ac.uk/documents/421974/487107/gtc.gif/dcb0816d-8f1d-4ffc-9936-8be0cafdd47f?t=1396623374258
(from the CRU’s mouth)
“Witchhunt” is a bit extreme, but if you apply the term to the search for the Pause, you should also apply it to the search for its lack, and even to the search for CAGW in the first place. The discussion long ago spilled out of academia and into politics and has taken on a Frankenstein persona in that arena. There is no “Law of Physics” that explicitly states that CO2 (or CH4) will warm the Earth. Physics only tells us that those molecules interact differently with certain wavelengths than O2 and nitrogen. There are peer reviewed papers that claim this will warm the planet and other peer reviewed papers that say it will not, or at most trivially. The actual effect of any and all GHG in a real atmosphere is not, as far as this non-scientist can tell, anywhere near settled. The myriad of feedbacks, positive and negative, with varying time constants from minutes to centuries and possibly many still unknown, promise to keep the science side hotly debated for many years.
The Witchhunt is mostly on the political side. In my view, the science will probably be much closer to being settled in 10-20 years or so. But politics works much faster than that. The CAGW crowd, aided by and allied with groups with vastly different agendas, want to enact draconian changes to existing socio-economic structures and they want to do it NOW. And they are winning the political war at the moment. The sceptic side has, IMHO, better science on their side but are seriously out-gunned in the political arena and need time to gather data and organize politically.
Witchhunt? Yeah. The CAGWers are trying to burn the evil pause witch at the stake and the sceptics are busy peeing on the matches and calling for the fire trucks…
Regards,
A question that I am sure others have asked before, inspired by jtfWUWT’s charts above:
“What equation/model/handwaving/positive feedback explains why the temperature rises so quickly after a glacial, and to approximately the same max temperature of 2-3C above the benchmark level, but then relaxes back much more slowly and discontinuously?”
Is the Earth in this Quaternary period more accurately described as a near-glacial planet than a temperate one, given that it seems to spend more time below the benchmark temperature?
One could perhaps almost be forgiven for thinking that the equilibrium temperature is 8 C below benchmark, and is periodically and brutally disrupted by global warming.
The usual reaction of warmists is to say that “satellites measure different things”.
Which is very strange, because in 2013 the UK Met Office said:
“Changes in temperature observed in surface data records are corroborated by records of temperatures in the troposphere recorded by satellites”
https://notalotofpeopleknowthat.wordpress.com/2015/01/17/met-office-say-surface-temperatures-should-agree-with-satellites/
Well, except when they are not!
Thanks, Paul. It is not only satellites, not even homogenized thermometers are helping the church of global warming these days.
Not only satellites..
Well, that depends
http://www.woodfortrees.org/graph/gistemp/from:1997/offset:-0.35/compress:12/plot/hadcrut4gl/from:1997/offset:-0.26/compress:12/plot/rss/from:1997/offset:-0.10/compress:12/plot/uah/from:1997/compress:12/plot/gistemp/from:1997/offset:-0.35/trend/plot/hadcrut4gl/from:1997/offset:-0.26/trend/plot/uah/from:1997/trend/plot/rss/from:1997/offset:-0.10/trend
Hadcrut4 is not updated to the end of the year here, so the trend for that will be higher, as will UAH version 5.6. So if you want to ditch the surface you will have to ditch UAH. Actually you will have to ditch everything except RSS. Keep the outlier.
Nonsense. Both RSS and UAH show 1998 as significantly warmer than 2014.
Then, of course, we had this paper from 2004, which found:
For years the debate about climate change has had a contentious sticking point — satellite measurements of temperatures in the troposphere, the layer of atmosphere where most weather occurs, were inconsistent with fast-warming surface temperatures.
But a team led by a University of Washington atmospheric scientist has used satellite data in a new and more accurate way to show that, for more than two decades, the troposphere has actually been warming faster than the surface. The new approach relies on information that better separates readings of the troposphere from those of another atmospheric layer above, which have disguised the true troposphere temperature trend.
“This tells us very clearly what the lower atmosphere temperature trend is, and the trend is very similar to what is happening at the surface…..
What remained indicated that the troposphere has been warming at about two-tenths of a degree Celsius per decade, or nearly one-third of a degree Fahrenheit per decade. That closely resembles measurements of warming at the surface, something climate models have suggested would result if the warmer surface temperatures are the result of greenhouse gases
https://notalotofpeopleknowthat.wordpress.com/2015/02/01/satellites-confirm-global-warming-standstill/
“a University of Washington atmospheric scientist has used satellite data in a new and more accurate way”
Another model no doubt…..
Yes, one that Spencer et al. use to extract the temperature from the microwave signal. Unfortunately, Spencer et al. made some errors in their model: they didn’t allow for the decay in the satellite’s orbit, for example, and there was a sign error in one of the terms. Once they made those adjustments, the satellite results showed an increased trend, much closer to the surface measurements.
Unlike satellite data providers, who don’t release their adjustment code.
Here is the NCDC code.
I’ve pointed it out many times.
Some background:
http://climatecode.org/blog/2011/07/homogenization-project-progress/
A simple-to-find web page:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/#homogeneity
Now don’t blow a head gasket, or falsely claim the software is not out there:
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/
Steven,
Thanks for pointing out the code. What application do I need to open the source code? Is it commented?
What is the point of looking at the code when the code does absurd things, like adjusting temperature records backward in time, and does other things that are absurdly non-real? They acknowledge they do it, then simply ignore all criticism. When you ask why, they just tell us, “that’s the way we do it.” If you’re adjusting temperatures, then at an absolute minimum, adjust them in some way consistent with physical reality.
This is the wrong question. There is a long term upward trend in temperatures, which has been evident for over a century and was in place before CO2 could have been driving it.
So, the question is not, are the temperature sets rising, but are they rising beyond the long term trend?
The answer is: No, they are not rising. They are all, all of them, falling relative to the long term trend.
And, BTW, the obvious ~60 year pattern which remains after taking out the long term trend is very regular, and also was in place before CO2 could have caused it. Take that component out, and there is very little if anything left which could be influenced by CO2.
All things being equal, increased CO2 concentration in the atmosphere should produce warming of the surface. But, all things are NOT equal. In a massively complex feedback system, such as the Earth’s climate system, there is no guarantee that changing levels of CO2 will have any effect at all on global surface temperatures and, in fact, the data say that they do not have a significant impact.
The 2014 Canadian winter was the coldest since 1996. What happened to Arctic amplification? For that matter, what happened to CAGW in Canada, as we are again in the grip of an extremely cold winter?
Frankly, I don’t believe global measurements reflect AGW, once one looks at the well-hidden error bars for the land, sea and satellite statistical machinations.
And even if we accept that some warming is happening, the big question is how much man is contributing to it.
“Arctic amplification” could be said to be that blast of cold air that arrives in southern Canada and the U.S. when the jet stream flows southward across all that northern snow.
Apparently they have not yet figured out how to credibly “adjust” the satellite data to show warming. My optimistic side wants to think that they never will.
Meanwhile in the equatorial Pacific…
The weakening series of Kelvin waves has given way to a cold subsurface pool in the eastern tropical Pacific:
http://www.bom.gov.au/climate/enso/sub_surf_mon.gif
Trades are holding up robustly.
We have the possible suggestion of the beginning of a cold tongue:
http://www.bom.gov.au/fwo/IDYOC001.gif?1297119137
Also, perhaps most importantly, the Peruvian anchovy fishery, which was depleted due to the mild El Nino, is on the road to recovery, led by a very strong juvenile year class:
Research body: Peruvian sea conditions favoring anchovy biomass recovery. 9 January 2015.
Current sea conditions off the Peruvian coast are favoring the recovery process of anchovy biomass, said Enfen, the national committee watching the El Nino phenomenon. Anchovy has expanded its spatial coverage in the Peruvian sea, although juveniles account for most of the resource.
http://www.undercurrentnews.com/2015/01/09/research-body-peruvian-sea-conditionsfavoring-anchovy-biomass-recovery/
Strong juvenile recruitment in a pelagic fish like the anchovy means only one thing: strong upwelling bringing up nutrients and increasing larval survival via increased food item density in the spawning grounds.
All the above pointing to a La Nina.
Fascinating. Thanks.
The surface records have been so manipulated, with known defects, that this is not entirely surprising.
It’s getting cold in the eastern equatorial Pacific (a complete bust).
We’ll see what happens in 2015.
But this discrepancy MUST be explained.
You mean this discrepancy:
http://www.woodfortrees.org/graph/gistemp/from:1997/offset:-0.35/compress:12/plot/hadcrut4gl/from:1997/offset:-0.26/compress:12/plot/rss/from:1997/offset:-0.10/compress:12/plot/uah/from:1997/compress:12/plot/gistemp/from:1997/offset:-0.35/trend/plot/hadcrut4gl/from:1997/offset:-0.26/trend/plot/uah/from:1997/trend/plot/rss/from:1997/offset:-0.10/trend
Even in the 1960s it was already clear that human influences were becoming large enough to significantly affect the climate, and the evidence has only grown steadily more robust in the succeeding decades.
In 1981 James Hansen predicted that the warming effect of our greenhouse gas emissions would be distinguishable from natural variability by the end of the 20th Century, and he was proven correct.
Many other AGW predictions have now been confirmed by observations, such as:
Arctic sea ice is in rapid decline.
Antarctic ice shelves have collapsed and disintegrated.
Global sea level is rising, and the rise is accelerating.
Antarctica is deglaciating.
Greenland is deglaciating.
Mountain ice caps and glaciers are melting worldwide.
Climate zones are shifting polewards and uphill.
The atmosphere is becoming more humid.
Extreme heatwaves have increased by more than a factor of 10.
The Arctic is warming 3 times faster than the global mean.
Snow cover is declining.
Ocean heat content is rising.
The tropical belt is widening.
Storm tracks are shifting polewards.
Jet streams are shifting polewards and becoming more erratic.
Permafrost all over the northern hemisphere is warming and thawing.
Just how much confirming evidence does a person need, before accepting that the scientists have been proven right?
WarrenLB, surely you jest. Arctic ice is recovering, just like after the last decline, into the mid-1940s. See essay Northwest Passage. No Antarctic ice shelves have collapsed and disintegrated. See essay Tipping Points. Global sea level rise has been going on since tide gauges began, but is, if anything, decelerating. See essay PseudoPrecision. And so on and so on. Essays CAGW, Extreme Extremes, and Credibility Conundrums debunk more of your list. Read the ebook Blowing Smoke.
When you cite a litany like that, check the facts first. Otherwise you just show that you have drunk the Kool-Aid that was offered and have ceased thinking for yourself.
All facts. Check them out yourself. You’re missing the picture.
I did. I spent three years doing so, then wrote the ebook covering what I found. An ugly picture of consensus climate science emerges. Illustrations, examples, footnotes, the whole shebang. Stuff anyone can grasp and verify. You might learn something from the book. Some of what you have apparently imbibed amounts to academic misconduct. Essays A High Stick Foul, By Land or By Sea, and Shell Games. ‘Science by press release’ via alarmingly misleading PR. Essays Blowing Smoke, Good Bad News, and Last Cup of Coffee. Palpably bad science. Essays Burning Nonscience, Cause and Effect, and No Bodies. There is even stuff that is just made up to scare folks like you into thinking climate change is a big problem. Essays Polar Bears, Credibility Conundrums, and Somerset Levels. All driving nakedly corrupted agendas. Essays Caribbean Water and California Dreaming.
Nullius in Verba
warrenlb says:
All facts. Check them out yourself.
That is a classic example of a baseless assertion. Just because someone writes “all facts” means nothing. I did ‘check them out’ myself. They are not facts, as Rud Istvan shows. They are beliefs. Incorrect beliefs.
Rud already provided sources to check regarding this nonsense:
Arctic sea ice is in rapid decline.
Antarctic ice shelves have collapsed and disintegrated.
Global sea level is rising, and the rise is accelerating.
Here are more links falsifying warrenlb’s claims:
Arctic sea ice has risen above its multi-year average. Older Arctic ice has increased dramatically. And global ice has sharply increased.
Next, this graph is from Nature, hardly a climate-skeptic journal. It is corroborated by multiple satellite measurements. There is no acceleration in sea level rise, and very likely a deceleration. Mean sea level is the same as it was twenty years ago.
Next, much of the sea level rise is due to isostatic adjustments. Again, satellite measurements show that the sea level has been rising at the same rate for many years. There is no acceleration.
Almost every other item in warrenlb’s list is flat wrong as well. Rather than writing a long post falsifying his assertions, I invite warrenlb to take his best shot. Pick the item he believes is the most defensible, and post it. We will see who has the facts, measurements, and evidence — and who is arguing by assertion.
Dbstealey.
…
You made another big error.
..
You posted this graph
..
http://sealevel.colorado.edu/files/2014_rel4/sl_mei.png
..
As evidence “Mean sea level is the same as it was twenty years ago.”
…
Look closely at your graph. Read the words “GMSL (60 day smoothed detrended, seasonal signals removed)”.
..
Notice the word “detrended”
…
Do you know what “detrended” means?
…
Yup, it means the trend has been removed
Another bogus graph.
..
Better luck next time.
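For readers unsure what “detrended” means in practice: the trend is estimated by least squares and subtracted, and by construction the residual series has zero least-squares slope, so a detrended plot cannot show the trend. A small sketch with synthetic numbers (assumed for illustration, not the Colorado data):

```python
import math

# Synthetic monthly "sea level": a 3.2 mm/yr trend plus a seasonal cycle
t = [i / 12.0 for i in range(240)]                       # 20 years
sl = [3.2 * ti + 5.0 * math.sin(2 * math.pi * ti) for ti in t]

def ols_fit(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return slope, ybar - slope * xbar

slope, intercept = ols_fit(t, sl)            # recovers roughly 3.2 mm/yr
detrended = [yi - (slope * xi + intercept) for xi, yi in zip(t, sl)]

# The residual's own fitted slope is zero (up to float rounding)
print(round(abs(ols_fit(t, detrended)[0]), 9))  # 0.0
```

So the “detrended” label on a chart means the underlying rate of rise has been deliberately removed before plotting.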
Ah. Cherry-picking one item. What about all the others?
Let’s take them one by one.
PS Dbstealey
…
This graph
..
http://sealevel.colorado.edu/files/2014_rel5/sl_ns_global.png
…
Shows that in the past 20 years sea levels have risen about 50 mm (about 2 inches).
Yes, correct. You are right. Sometimes that happens when I’m posting lots of charts. I have lots more if you’d like to see them.
I’m happy to have made your day. After bird-dogging hundreds of my comments looking for a mistake, you did finally find one. However, you remain flat wrong in your belief that there is any credible evidence that human emissions are a problem. They’re not.
Now, do you agree with warrenlb’s list here? :
Arctic sea ice is in rapid decline.
Antarctic ice shelves have collapsed and disintegrated.
Global sea level is rising, and the rise is accelerating.
Antarctica is deglaciating.
Greenland is deglaciating.
Mountain ice caps and glaciers are melting worldwide.
Climate zones are shifting polewards and uphill.
The atmosphere is becoming more humid.
Extreme heatwaves have increased by more than a factor of 10.
The Arctic is warming 3 times faster than the global mean.
Snow cover is declining.
Ocean heat content is rising.
The tropical belt is widening.
Storm tracks are shifting polewards.
Jet streams are shifting polewards and becoming more erratic.
Permafrost all over the northern hemisphere is warming and thawing.
Either warrenlb is right, or he’s wrong. What say you? Opinions and assertions rejected. Post facts.
Like I posted. Here is the chart of multiple satellite measurements showing there has been no acceleration in sea level rise — one of the central tenets of the alarmist cult:
http://www.aviso.oceanobs.com/fileadmin/images/news/indic/msl/MSL_Serie_ALL_Global_IB_RWT_NoGIA_Adjust.png
Dbstealey.
..
Yes, the chart you posted does show the acceleration in sea level rise.
..
Since the average for the 20th century was less than 2 mm/yr, going from 2 mm/yr to 2.76 mm/yr is a 38% increase.
..
http://upload.wikimedia.org/wikipedia/commons/0/0f/Recent_Sea_Level_Rise.png
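Whatever one makes of splicing a century-long tide-gauge average onto a two-decade satellite record, the percentage arithmetic itself is easy to check (with the stated rates it comes to 38%):

```python
# Percentage increase implied by the two quoted rates
old_rate = 2.0    # mm/yr, claimed 20th-century tide-gauge average
new_rate = 2.76   # mm/yr, satellite-era figure cited in the thread
increase_pct = (new_rate - old_rate) / old_rate * 100.0
print(f"{increase_pct:.0f}%")  # 38%
```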
Like just about everyone here at this Best Science site, I flatly reject wikipedia propaganda. Just because one of Wm Connolley’s lemmings draws a red line in a chart means nothing.
Satellite data are the most accurate by far. There is no comparison, except possibly with enough tide gauges [which tend to match satellite data]. The satellite data show beyond doubt that sea level rise is on the same long term trend line as it has been. No acceleration:
http://sealevel.colorado.edu/files/current/sl_ns_global.png
I might add that this article corroborates what I wrote. Anyone disputing satellite data needs to show why it is wrong, explaining why all satellites are wrong and why tide gauges are all wrong [one of my links is to a peer-reviewed paper explaining that].
None of them support the acceleration nonsense. That is only a debunked talking point by the rapidly dwindling number of mouth-breathing head nodders at Hotwhopper and similar blogs.
If sea level rise were accelerating, we wouldn’t be hearing about it by assertion from the clueless. It would be above-the-fold news barked at us 24/7/365. But we hardly ever hear nonsense like that, even from the suckup media. It’s true that they don’t report the facts when the facts are inconvenient. Reporting them is what WUWT does:
http://notrickszone.com/wp-content/uploads/2012/12/Puls_2.jpg
Dbstealey
..
No one is arguing that the satellite data is wrong.
They are all measuring about 3 mm per year of sea level rise.
..
However, historical data says that the average for the 20th century was 1.7 – 2.0 mm/year.
..
So, all of your “charts” are showing the acceleration.
..
Thank you for that.
Anyone who is a MMGW True Believer will look at a chart like the one above, or the one below from the journal Nature, which clearly shows that global SL has been decelerating, and conclude that Down is Up, Ignorance is Strength, and SL is Accelerating. But that’s OK; Scientologists believe in volcano gods, too.
Rational folks look at the data and understand that the SL scare is just more bogus alarmism:
http://www.nature.com/nclimate/journal/v4/n5/carousel/nclimate2159-f1.jpg
Go argue with Nature if you don’t like it.
Reluctant to add anything to this silly food fight.
Any pre-satellite tide gauge data has to be corrected for isostatic rebound and plate tectonics. Mostly it isn’t; it’s just averaged. Well, that makes a fine hash.
The satellite-era stuff has to be understood within the limitations of the satellite instrumentation. Radar altimetry of the oceans from orbit is very difficult. Stuff like waves messes it up (oceans have waves, as you should know). Read essay Pseudo Precision in my book, then do your own research, then get back to me on the fundamental SLR closure problem, which shows that sometimes we just do not know.
Which, in normal science (but apparently not in post-normal science), is OK.
Dbstealey
..
Another big error
…
Look at the label on your “chart”
..
Note “Processing group”
…
Try and post something that is not a model
The average rate of sea level rise since the last ice age is in excess of 6 mm/yr
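That ballpark follows from roughly 120 m of post-glacial sea level rise over roughly 20,000 years (round numbers assumed here; the real rise was highly uneven and has slowed to a crawl over the last several millennia):

```python
# Back-of-envelope average rate since the last ice age
total_rise_mm = 120 * 1000   # ~120 m of rise, in mm (assumed round figure)
years = 20000                # ~20,000 years (assumed round figure)
print(total_rise_mm / years)  # 6.0 mm/yr on average
```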
When the subject isn’t going your way, just change the subject, eh warren? It’s in the Warmist troll playbook, after all.
warrenlb – you’re not even wrong.
Rich.
The atmosphere is not a greenhouse. I’ve been trying for decades to get my old house to heat up my woodstove. Just doesn’t work. A cooler body (house, atmosphere) cannot warm a warmer body (woodstove, Earth). Laws of Thermodynamics. Warming of the Earth happens exclusively by the Sun’s day cycle with some heat subsequently contained in the latent phase states of H2O. CO2 can’t heat anything, but is excellent plant food.
Hansen’s best case scenario — Scenario C — assumed a significant reduction in CO2 emissions. In fact the opposite happened. Despite this, his prediction of warming far exceeded what actually occurred. So Hansen was in fact wrong.
Further to this point, Hansen, Jones and now even Schmidt have (reluctantly) admitted that there has been a “pause”. Show me one climate scientist or model that predicted this. You can’t. So how can they have been proven right at this point? They got it wrong. It may not be the elephant in the room for you, but it’s there.
The other elephant in the room is that the litany of things Warrenlb cites simply isn’t true, or else lacks all sense of proportion, deliberately misrepresented or distorted by the warmunist consensus. Like every single example in the lead chapter of the US government’s official 2014 National Climate Assessment. Every single one, dissected individually in essay Credibility Conundrums. Pure propaganda in service to the present administration’s agenda: the science is settled, so climate change must be happening, so make some stuff up to fool the public into thinking it is true.
BTW, the research put me into the lukewarmer camp with Judy Curry and Matt Ridley.
I suggest you actually read Hansen’s paper rather than believe Monckton’s misrepresentation of it.
Hansen’s paper showed that the near-term temperature increase would be mostly due to GHGs other than CO2 (fig 2). Scenario C assumed an end to the production of CFCs and an end to the growth of CH4; if those occurred, then a pause in temperature growth was anticipated. That emission scenario actually happened.
Show me one Climate Scientist or model that predicted this
Hansen’s scenario C did (fig 3)!
You could have said this 100 years ago as well, and unless you have seen it with your own eyes, how do you know?
I always wondered just exactly how people can argue that sea level rise in 1910 (in the David Socrates graph) was due to man-made global warming, and here we have the answer: just some sort of disconnect from reality, by saying it really looked like it picked up in the 1930s. WHAT?! This is a graph of human emissions vs. Mauna Loa:
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7821ba9970b-pi
How the hell can anyone claim that the CO2 levels in 1958 changed the sea level in 1910?
warrenlb and David Socrates, here’s NOAA NCDC Sea-Level Deviation back to 1870:
[Figure: NOAA – National Climatic Data Center]
The rate of sea level rise before and after ~1950, when anthropogenic CO2 emissions became potentially significant, appears to be essentially identical. Where do you see any anthropogenic influence on sea level, and can you present any evidence to support this assertion?
Good question. I look forward to their reply.
Great comment. Had not thought about the SLR issue that way. Thanks for the perspective.
You show a graph that shows sea level rise that coincides with CO2 rise.
rooter, please. Get a grip.
Sure does look like it took off after 1930
David Socrates and rooter
As I pointed out above, anthropogenic CO2 emissions were not potentially significant until ~ 1950, so what do you think made sea level rise “look like it took off after 1930”? Furthermore, Earth didn’t start warming thereafter until ~ 1975, thus unless you want to argue that the sea level rise between 1950 – 1975 was caused by magical heat magically avoiding being measured in the atmosphere while being magically absorbed by the oceans, and resulting in thermal expansion, then the sea level rise between 1930 – 1975 should be chalked up to nature.
“CO2 emissions were not potentially significant until ~ 1950,”
…
Except that the Industrial Revolution started 100 years earlier, when coal was substituted for firewood as a primary energy source.
Right. Except that CO2 releases did NOT MATTER to the earth’s CO2 balance until post-1950, when global average temperatures fell, remained steady, rose, and remained steady. Exactly like they fell, remained steady, rose, and remained steady BEFORE CO2 levels increased post-1950.
David Socrates
Except that the Industrial Revolution started 100 years earlier, when coal was substituted for firewood as a primary energy source.
Thanks David; here I was thinking that Patton and Rommel were renowned for their advanced equine attack strategies. Regardless, if you do some reading to get caught up:
http://wattsupwiththat.com/2014/03/29/when-did-anthropogenic-global-warming-begin/
you’ll see that there is no evidence that anthropogenic CO2 was sufficient to influence Earth’s temperatures prior to 1950, i.e.:
“Climate model simulations that consider only natural solar variability and volcanic aerosols since 1750—omitting observed increases in greenhouse gases—are able to fit the observations of global temperatures only up until about 1950.” NASA Earth Observatory
“The observed global warming of the past century occurred primarily in two distinct 20 year periods, from 1925 to 1944 and from 1978 to the present. While the latter warming is often attributed to a human-induced increase of greenhouse gases, causes of the earlier warming are less clear since this period precedes the time of strongest increases in human-induced greenhouse gas (radiative) forcing.” NASA Geophysical Fluid Dynamics Laboratory / Delworth et al., 2000
“Internal climate variability is primarily responsible for the early 20th century warming from 1904 to 1944 and the subsequent cooling from 1944 to 1976.” Scripps / Ring et al., 2012
“There exist reasonable explanations, which are consistent with natural forcing contributing significantly to the warming from 1850 to 1950.” EPA
If you look at Global CO2 Emissions from Fossil-Fuels;
[Figure: EPA – Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy]
Global CO2 from Fossil-Fuel Emissions By Source;
[Figure: Carbon Dioxide Information Analysis Center]
and Cumulative Anthropogenic CO2 Emissions from Fossil-Fuels,
[Figure: Carbon Dioxide Information Analysis Center]
you can see that anthropogenic CO2 emissions from fossil fuels did not become a potentially consequential factor until approximately 1950, and then grew rapidly thereafter. Even the IPCC tacitly admits this, in that in AR5 they only claim that it is “extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century”.
Now that I’ve answered your question, can you answer mine? “The rate of sea level rise before and after ~ 1950, when anthropogenic CO2 emissions became potentially significant, appear to be essentially identical. Where do you see any anthropogenic influence on sea level and can you present any evidence to support this assertion?”
justthefactswuwt says:
“you can see that Anthropogenic CO2 Emissions from Fossil-Fuels did not become potentially consequential factor until approximately 1950”
That is very strange. In 1950 the increase in forcing from CO2 was 0.56 W/m2 compared to the level before the industrial revolution. That is 1/3 of the increase in 2010. But that increase could not influence heat accumulation and sea level rise.
Another issue here is, of course, how justthefactswuwt now has no objections to making up data for the whole ocean by infilling from gauges along the coast. Making up data with infilling used to be a problem for jtfw. Not with sea level rise. Or one Greenland proxy.
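The 0.56 W/m2 figure can be reproduced from the widely used simplified expression for CO2 forcing, dF = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998); the concentrations below are approximate values assumed for illustration (~280 ppm pre-industrial, ~311 ppm in 1950, ~390 ppm in 2010):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

f_1950 = co2_forcing(311.0)   # assumed ~1950 concentration
f_2010 = co2_forcing(390.0)   # assumed ~2010 concentration

print(round(f_1950, 2))           # 0.56
print(round(f_2010, 2))           # 1.77
print(round(f_1950 / f_2010, 2))  # 0.32, i.e. roughly 1/3
```

The logarithmic form is why a third of the concentration increase contributes roughly a third of the forcing; how much warming any of it produces is the contested question in this thread, not the arithmetic.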
rooter says:
Making up data with infilling used to be a problem for jtfw.
JustTheFacts is not making up data.
rooter aka makesnosense
That is very strange. In 1950 the increase in forcing from CO2 was 0.56 W/m2 compared to the level before the industrial revolution. That is 1/3 of the increase in 2010. But that increase could not influence heat accumulation and sea level rise.
How about you present any credible evidence that CO2 was consequential prior to 1950, e.g. a link to a paper, an IPCC assessment report, something beyond your own conjecture?
Another issue here is of course how justthefactswuwt now has no objections to making up data for the whole ocean by infilling from gauges along the coast. Making up data with infilling used to be a problem for jtfw. Not with sea level rise. Or one Greenland proxy.
Seriously, I am supposed to challenge the quality of every data source in every comment I write? Let’s put it this way: all the data is crap, our understanding of Earth’s climate system is rudimentary, our measurement capabilities are extremely limited, and our data records in most cases are laughably brief. Taken from an objective perspective, there is no credible evidence of the influence of anthropogenic CO2 emissions on Earth’s climate. Try to make some sense of that…
justthefactswuwt says:
“rooter aka makesnosense
That is very strange. In 1950 the increase in forcing from CO2 was 0.56 W/m2 compared to the level before the industrial revolution. That is 1/3 of the increase in 2010. But that increase could not influence heat accumulation and sea level rise.
How about you present any credible evidence that CO2 was consequential prior to 1950, e.g. a link to a paper, an IPCC assessment report, something beyond your own conjecture?”
I just did. In the very quote you supplied. 1/3 of the increase in forcing in 2010 was in place in 1950. IPCC is not denying that of course. And if you really want to have IPCC as your authoritative source, that is OK.
justthefactswuwt says:
“Seriously, I am supposed to challenge the quality of every data source in every comment I write? Let’s put it this way: all the data is crap, our understanding of Earth’s climate system is rudimentary, our measurement capabilities are extremely limited, and our data records in most cases are laughably brief. Taken from an objective perspective, there is no credible evidence of the influence of anthropogenic CO2 emissions on Earth’s climate. Try to make some sense of that…”
If you want to use infilled data that is ok with me. But then again you could stop having objections to infilling.
You just presented credible evidence of the influence of anthropogenic CO2 emissions on Earth’s climate: simultaneous sea level rise and CO2 rise.
rooter
I just did. In the very quote you supplied. 1/3 of the increase in forcing in 2010 was in place in 1950. IPCC is not denying that of course.
Right, 1950, that’s just when the cooling began. How much forcing was in place in 1910, when the warming began?
[Figure: National Oceanic and Atmospheric Administration (NOAA) – National Climatic Data Center (NCDC) – Base Period 1901-2000]
And if you really want to have IPCC as your authoritative source, that is OK.
Yes, even they disagree with you…
rooter
If you want to use infilled data that is ok with me. But then again you could stop having objections to infilling.
You just presented credible evidence of the influence of anthropogenic CO2 emissions on Earth’s climate. Simultaneous sea level rise and CO2 rise.
If you did any more hand-waving you’d probably take off. I would try to explain why correlation doesn’t prove causation, but I am sure it would be lost on you…
hey warren:
We are all wondering…in what field of study did you get your [Masters]?
We are waiting for your answer.
What was the subject of your thesis?
What have you published since getting your degree?
In what peer-reviewed journals can we find your work?
Since you choose to opine on this subject, we appeal to your authority; just what is your position of knowledge?
Finally, what’s your real name, so we can run a Google Scholar search on your work?
The world waits.
davideisenstadt (challenging warrenlb)
I am willing to grant that warrenlb has attended college. Perhaps he has written a Masters thesis or senior project.
Who “pal-reviewed” it? Is it on line or available in his college’s library? (Mine is, for example.)
But, more important, for this self-styled “authority” quoting “authorities” as his Bible (er, religious convictions):
How many peer-reviewed articles and papers has HE completely read?
12?
120?
1200?
How many has HE read only the summary of – that first paragraph where exceptions and “oh-by-the-ways” and assumptions and approximations and “did-not-study-this” (or “we studied only these three months in two voyages to the continent”) are cleverly NOT stated?
12?
120?
240?
1024?
2048?
How many articles has HE read more than once?
2?
20?
40?
How many has HE called or written the authors and challenged assumptions or the conclusions or the methods?
1?
2?
4?
48?
We ask what warrenlb’s background is because we do not believe he actually understands the subject. ANY PART of the subject, much less the whole and all of its politics and all of its economic impacts on the world.
I read 2-3 papers daily (in their entirety) and often the summaries of more than 50 every week. And yes, I DO call and challenge and question the authors of those papers I quote.
(It gets bad when you can recall the problems with the supposed results and the limits of the measurements and the restrictions on the conclusions of a paper written in 1972 just by seeing the author’s name and year when it is cited by somebody else: Oh, yeah, that guy did not include the effects of diffuse radiation, that team only did a paper summary but had no measurements – but it doesn’t matter because they all used the same sources I already rejected for other reasons, that guy cut off measurements at 10 degrees, that guy is merely re-writing the paper already cited using the same results in the previous 4 papers by his team, that paper had different results for their west-bound journey than their east-bound surveys because of light reflecting off of the ship, that guy reported evaporation results – but for freshwater still lakes high in the Andes mountains with far different air pressures – not for the Arctic but maybe I can use it anyway, that guy duplicated what she did in this other paper because their plots are exactly the same, that one result was new but he did not have any corrections for wind speed, that guy measured wind speed but it was fresh water summer months only, that team excluded low altitude data, those guys merely used the theoretical pure-water laboratory equations, that guy was good, but his team mixed the diffuse and direct radiation plots up in his final calculation, that guy reported only area approximations that mixed snow and ice and water together …. Those guys were good, but they reported only whole-month approximations, not individual daily measurements.)
“because we do not believe he actually understands the subject. ANY PART of the subject,”
…
Just because you do not agree with the position taken by someone does not give you the right to assert that they do not understand the subject. He/she might know much more about it than you do.
warrenlb has established nothing, other than that over 280 replies he demands everybody follow HIS chosen “authorities” – even when such actions would be – ARE! – responsible for the deaths of millions of innocents and the harm to billions. For no tangible good, only imagined potential results that explicitly cause harm and injury, while not even affecting the cause that he fears!
But HE demands WE follow and obey only HIS government-paid self-selected “experts” because THEY claim THEY are the ONLY “experts”?
For example Mr RACookPE1978
…
All the papers you read, all the articles you write, mean nothing when you post a comment such as this…
..
“Those who are to blame for the 60+ million dead killed BY the ban on DDT are desperate to avoid responsibility for their direct role in that massacre.”
…
That comment reveals to all of us that you don’t have a clue what the BAN covered. The “ban” wasn’t a global ban, did you know that? Did you know that you can still use DDT for vector control? No? It seems your sources of information are a bit…what’s the word…oh…lacking.
…
RACook is right, of course. The U.S. led the way in stopping the use of DDT. Our refusal to allow its use was copied by most countries, leading to literally millions of deaths. “Socrates” emits more ignorance:
…you don’t have a clue what the BAN covered. The “ban” wasn’t a global ban, did you know that?
In fact, the U.S. caused a very effective global ban on DDT. Without going into all the details, the basic facts are these: when the U.S. EPA unilaterally banned DDT at the demand of ‘environmentalists’, the UN followed by placing authority for DDT use and distribution under the control of the U.S. Agency for International Development [USAID], which said it would stop sending U.S. foreign aid to any country using DDT. Since there were other insecticides available [although they were much more expensive, more persistent, and with much more serious side effects], and since foreign aid was immense compared to the DDT budgets, practically every country promptly stopped using it. [There are plans now afoot to completely phase out DDT. The unquestioned result will be that more poor people are killed.]
Even though the ban was not officially worldwide, the effect was just the same. The U.S. produced almost all DDT manufactured worldwide, exporting 60%. Political appointee Wm. Ruckelshaus unilaterally made the DDT ban worldwide by decree [sound familiar?]. There are plenty of statistics showing what happened as a result: millions upon millions of poor people died from malaria. A common thread that runs through all environmental organizations is that they want lots of 3rd world people eliminated. Only the most naive and credulous cannot see that.
There is no credible dispute about any of this. The Sierra Club director applauded banning DDT, stating that “By using DDT, we reduce mortality rates in underdeveloped countries without the consideration of how to support the increase in populations.” So they encouraged deaths in underdeveloped countries in the same way that Margaret Sanger used birth control to reduce the African American population in the U.S. [I don’t want to get into a food fight over those facts, there is a mountain of evidence proving it.] Some may disagree because they are totally closed-minded, and thus blind to anything outside of their belief system. But in fact, DDT is about as harmless as a compound can be to humans and animals. People have publicly swallowed a tablespoon of DDT a day for months, without any side effects. It saves lives by the millions. But you still can’t go out and buy it in most places, even though it is nearly as harmless as CO2.
Finally, RACook is doing it the right way: by constantly reading the literature. Most alarmists made their decision early on, and they cannot change their minds now because they aren’t skeptics — the only honest kind of scientists. Instead, as the great writer Leo Tolstoy said, they are people who got on the wrong track, then told their pals what to think, and now they’re stuck feeling like chumps if they admit they were wrong. So they dig in their heels, cover their ears, and do the “LA-LA-LA-LA, I can’t he-e-e-e-ear you!” routine:
Dbstealey…
..
You can still buy DDT from India today.
The “ban” doesn’t stop them from making it.
Thank you for proving my point. India and other countries now manufacture their own DDT. They have had 40+ years to gear up. But when Ruckelshaus banned it, almost all DDT was produced by the U.S., which exported the major part of its production. Citation on demand. From anyone but you or your sidekick.
One metric of the importance of the “Pause” is the Warmists’ frantic multitude of the-dog-ate-my-homework “explanations” for it.
The big problem alarmists face is that there is a physical method to calibrate thermometers: water boils at 100 degrees and freezes at 0 degrees, but they will try to adjust that to fit their models.
Finally, somebody noticed. I have been saying this since 2010 when my book (What Warming?) came out. Goes to show that climate “scientists” either don’t know or don’t care about the state of data they are working with. That is the most benign interpretation I can think of. It is also possible that they were never taught how to do a scientific literature survey for articles they publish. And worse yet, it could be a deliberate attempt to suppress the existence of the hiatuses like the one we are living through.

Yes, I said hiatuses in plural because there was one in the eighties and nineties they have successfully suppressed. I tend to favor the last interpretation because all the available information points to that.

It is a colossal data falsification involving a collaboration of GISS, NCDC, and HadCRUT data sources. Starting in 1979 (or maybe earlier) they have given their temperature curves an upward push. It amounts to 0.1 degrees Celsius for the time interval 1979 to 1997. I discovered that while working on my book and even put a warning into its preface but they kept right on going into the twenty-first century. It has gotten so ridiculous that in their temperature curves the 2010 El Nino is now taller than the 1998 super El Nino, which is impossible according to satellites.

The eighties and the nineties were a period where ENSO was active and it created a wave train of five El Nino peaks there, with La Nina valleys in between. In a situation like this the global mean temperature is determined by drawing a straight line from the tip of an El Nino peak to the bottom of its neighboring La Nina valley and putting a dot in its center. These dots define the global mean temperature for the specific dates involved. Connecting the dots gives the global mean temperature trend. I did this with all the El Nino peaks in the satellite record and found that the dots lined up in a horizontal straight line. See figure 15 in my book.
This makes it an 18 year no-warming stretch, equivalent in length to the current hiatus. In between these two no-warming periods is the step warming of 1999. In only three years it raised global mean temperature by a third of a degree Celsius and then stopped. It is the only warming we have had since 1979. As a result, 21st century temperatures are all higher than twentieth century temperatures, with the exception of 1998. This gave a chance to people like Hansen to misinterpret the meaning of “highest” temperature in climate science. To apply it to the total no-warming period covered by IPCC we have to add these two no-warming periods. This gives us 36 no-warming, “hiatus” years that together make up three-quarters of the time that the IPCC has been in existence. My original data were derived from HadCRUT3 (figure 24) but after the book went to press I discovered that the three ground-based data-sets had cooperated in creating this warming. Computer processing was involved, and unbeknownst to them the computer had left its footprints in identical locations in all three data-sets. These computer traces comprise sharp upward spikes at the beginnings of years. One of them even sits directly on top of the super El Nino. These are nominally independent data-sets residing on two sides of an ocean. Clearly, they cannot be trusted and satellite data should be used whenever possible. This has further implications for determining which year was warmest. We know it was the super El Nino year of 1998 but these incompetents chose 2014, based on it being higher than the El Nino of 2010 in their dataset. Which is only possible because, thanks to falsification, that El Nino stood higher than the super El Nino of 1998.
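For what it is worth, the peak-to-valley midpoint construction described above is easy to sketch in code. The anomaly values below are invented purely for illustration; this only shows the mechanics of the construction, not his actual data:

```python
# Toy sketch of the peak-valley midpoint construction: pair each
# El Nino peak with its neighboring La Nina valley, take the midpoint
# of the two (year, anomaly) points, and treat the midpoints as the
# underlying global mean trend. All values here are made up.

def midpoint_trend(peaks, valleys):
    """peaks/valleys: equal-length lists of (year, anomaly_C) tuples."""
    dots = []
    for (py, pa), (vy, va) in zip(peaks, valleys):
        dots.append(((py + vy) / 2.0, (pa + va) / 2.0))
    return dots

# Hypothetical peak/valley pairs (illustrative only):
peaks   = [(1983.0, 0.25), (1987.5, 0.30), (1998.0, 0.55)]
valleys = [(1984.5, -0.25), (1989.0, -0.20), (1999.5, -0.45)]

for year, anom in midpoint_trend(peaks, valleys):
    print(f"{year:.2f}: {anom:+.3f} C")
```

If the midpoints come out level, as claimed for figure 15, the construction reads the ENSO wave train as oscillation around a flat baseline.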
Arno, see the peak in the millennial warming in the RSS data in 2003 and the cooling since then, referred to in earlier comments:
http://www.woodfortrees.org/graph/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
Arno – and the true relationship of the 1998 and 2010 El Ninos
But this can’t be so! The WMO has just released its report that not only was 2014 the hottest year on record, but 14 of the 15 hottest years since the late 1800s were in the 21st century! Just follow the link and you can see for yourself!
https://www.wmo.int/media/?q=content/warming-trend-continues-2014
Certainly the world government, er, United Nations wouldn’t lie to us, right? Certainly every year this century was hotter than the 1930s by a significant margin ’cause they wouldn’t lie to us, right? Do I really need to use the sarcasm tag?
It might help if you understood what you were reading. I’m guessing you are from the US which means there is a good chance you are making the common mistake of confusing US temperatures with global temperatures.
There is strong evidence that 14 of the 15 warmest years GLOBALLY since 1880 have occurred since 2000. While that fact is not necessarily inconsistent with a ‘pause’ in warming, certain climatic variables (e.g. ocean oscillations, solar activity) suggest we should have observed noticeable (statistically significant) cooling over the past 15 years.
I don’t go along with the catastrophic AGW scenario, but I do expect steady warming to resume again in the next 5-10 years.
This quote from Trenberth appears to be a later addition to Revkin’s piece from when I originally linked to it:
NYT Dot Earth: Andrew C. Revkin: A Fresh Look at the Watery Side of Earth’s Climate Shows ‘Unabated Planetary Warming’
In an email exchange, Kevin Trenberth of the National Center for Atmospheric Research said he was concerned that the analysis, limited to data from the relatively sparse array of Argo devices, was missing large areas of the seas that other studies, including his own, have identified as significant. As a result, he said, “their estimates look low-balled”. Here’s more from Trenberth:
“It is disappointing that they do not use our stuff (based on ocean reanalysis with a comprehensive model that inputs everything from SST, sea level, XBTs and Argo plus surface fluxes and winds) or that from Karina von Schuckmann.” [Trenbert (sic) pointed me to two studies, here and here.]
http://dotearth.blogs.nytimes.com/2015/02/02/a-fresh-look-at-the-watery-side-of-earths-climate-shows-unabated-planetary-warming/?_r=0
Joanne Nova is under attack in the comments, which are accessed at the top of the article next to the date.
justthefactswuwt February 3, 2015 at 7:58 am says
The real climate question humans should be grappling with is, when is the current interglacial going to end and what, if anything, can we do about it?
a) when the sun enters an inevitable, centuries-long ‘Maunder Minimum’.
b) Yes, by dredging some of the 800 m deep sediments of the Denmark Strait in order to allow more cold Arctic water to overflow into the N. Atlantic.
? !
(the more cold Arctic water flows into the Atlantic, the more warm Atlantic water can then flow into the Arctic).
Interesting, how about:
a) a very large sulfur dioxide rich volcano in the Northern hemisphere that results in a volcanic winter that we can’t recover from this late in the Milankovitch cycle.
b) a huge fleet of soot-spraying planes to paint the Arctic snow and ice black, especially the edges of the advancing ice sheets, in order to try to beat them back by artificially decreasing Earth’s albedo.
Northern volcanoes (Iceland, the Aleutians and Kamchatka) also throw a lot of ash into the atmosphere, causing a temporary loss of atmospheric transparency, but eventually much of it gets deposited around the Arctic, increasing its albedo, while at the same time the mineral content reduces the melting temperature of the snow; a sharp drop followed by a steep temperature rise (the case of the CET).
In a recent WUWT publication a lot was said about Milankovitch cycles. The planetary configuration (mainly Jupiter & Saturn) is responsible for the Earth’s orbital instability, but at the same time (according to vukcevic, as he says so himself here) these two planets are responsible for the sun’s grand and not-so-grand minima; of course Milankovitch was not to know that, and thus attributed sudden collapses of the temperatures to the gradual reduction in insolation.
On a more serious note, further warming may open vast areas of Canada and Siberia to agriculture, while even moderate cooling could plunge the populations of many countries into poverty.
Just imagine: according to the bonny bedlam boyz of SkS, the climate system has accumulated over 2 billion Hiroshimas worth of extra heat since 1998. Yowzer. And nought to show for it. It has simply vanished.
No wonder they are cuckoo for cocoa puffs.
Trouble with the Land Based Record?
http://img.brainjet.com/slides/1/3/1/2/5/0/1312502299/08bb813b22c6ed49735233a5778a744ba584c1e1.jpeg
Say it ain’t so.
Maybe it’s degrees Kelvin?
Steven Mosher February 3, 2015 at 9:02 am
wrong. The code is out there.
What a great way to say it. Just like X-files and just as ephemeral. Where is the code!? Provide a link or a contact person who can send it.

In addition, the code is only part of the problem. After seeing the Harry-Read-Me document, I wonder if the annotations and metadata that are necessary to apply appropriate “adjustments” are reliable. So, maybe you could tell us where to find raw data with annotations and metadata along with the code and a description of how each type of adjustment was selected (was it on the basis of trends that seemed “wrong”, unlikely jumps in averages, annotations indicating a site had been moved or that a sensor had been changed or that the time of day of measurements had changed?). I know some of this is discussed in a general way in a few papers, but without the raw data, the annotations, the metadata, the code, and the criteria used in each case to select one adjustment vs another adjustment, why should anyone trust the final numbers? Why don’t climate scientists insist on getting their act together and making sure all of this is readily accessible?

When I publish papers that include microarray data, all the journals I have published in require that the raw data be deposited in a public database before final acceptance of the paper. Typically, it is also required that if the software used to analyze the data is not commercially or publicly available it has to be included in the supplemental data for the paper. Why aren’t climate scientists interested in this level of accountability?

One thing we agree about: the satellite groups also should make everything publicly accessible. If everything is so reliable, I would think there would be a rush from all concerned to get it out to anyone who is interested. How can it be viewed as anything but suspicious that this is not the case?
Steven Mosher February 3, 2015 at 9:02 am
wrong. The code is out there.
================================
Maybe Steve M should just explain how “the code” adjusted the Iceland stations. That should be much easier than explaining the entire code.
He actually provided a link for the code in another comment. I intend to take a good look at it.
From Pat at 5:40 AM
“NATURE CLIMATE CHANGE FOR RELEASE: FEBRUARY 2, 2015
Researchers led by Dean Roemmich, a physical oceanographer at Scripps Institution of Oceanography, UC San Diego, found that the top 2,000 meters (6,500 feet) of the world’s oceans
warmed at a rate of 0.4 to 0.6 watts per square meter (W/m²)
between 2006 and 2013. The rate translates to a warming of roughly 0.005° C (0.009° F) per year in the top 500 meters of ocean and 0.002° C (0.0036° F) per year at depths between 500 and 2,000 meters.”
Last I checked, W/m² is a unit of power flux, not temperature. These people really need to keep the units straight.
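As a sanity check on the quoted figures, here is a back-of-envelope conversion from a flux in W/m² to a warming rate in °C per year, using rough textbook values for seawater density and specific heat (the constants are approximations, so this is only an order-of-magnitude sketch):

```python
# Back-of-envelope check: convert a heat flux in W/m^2 into a warming
# rate in deg C per year for a column of seawater of given depth.
RHO = 1025.0            # seawater density, kg/m^3 (approximate)
CP = 3990.0             # specific heat of seawater, J/(kg K) (approximate)
SECONDS_PER_YEAR = 3.156e7

def warming_rate(flux_w_m2, depth_m):
    """Warming rate (deg C/yr) if the flux is absorbed uniformly over depth_m."""
    joules_per_year = flux_w_m2 * SECONDS_PER_YEAR   # J per m^2 per year
    heat_capacity = RHO * CP * depth_m               # J per K per m^2
    return joules_per_year / heat_capacity

# 0.5 W/m^2 spread over the top 2000 m:
print(f"{warming_rate(0.5, 2000.0):.4f} C/yr")
```

With 0.5 W/m² spread over 2000 m this gives about 0.002 °C per year, which is the same order of magnitude as the per-layer rates quoted from the press release, so the units do hang together.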
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
Werner, why do you not include the temperature data from Weatherbell, which is probably more accurate since the grid they use is more refined?
I basically use what WFT has, or at least what it used to have. Hadcrut4.3 is not shown yet.
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
If you’re looking for maximum temperature and you have an instrument that measures temperature more frequently (electronic thermometer) won’t you naturally catch any small, short increases that the glass thermometer averages away? That could easily explain a 0.9 degree C difference… Has anyone done a proper study on this?
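A quick toy simulation of that idea (the noise model and magnitudes are invented, so this is only a sketch of the mechanism, not a study): a fast-responding sensor tracks short-lived fluctuations and so records a higher daily maximum than a slow-responding one exposed to the same weather.

```python
# Toy simulation: an electronic sensor that responds quickly catches
# short temperature spikes that a slow-responding glass maximum
# thermometer effectively averages away, inflating recorded maxima.
import random

def daily_max(samples_per_day, response):
    """Max of one simulated 'day'; response in (0,1] sets how fast the
    sensor follows the true fluctuating temperature (1 = instant)."""
    base = 20.0
    smoothed = base
    temps = []
    for _ in range(samples_per_day):
        true_temp = base + random.gauss(0, 1.0)      # short-lived fluctuation
        smoothed += (true_temp - smoothed) * response  # sensor lag
        temps.append(smoothed)
    return max(temps)

random.seed(42)
fast = sum(daily_max(1440, 0.9) for _ in range(100)) / 100  # electronic
random.seed(42)
slow = sum(daily_max(1440, 0.1) for _ in range(100)) / 100  # glass-like
print(f"fast sensor mean daily max: {fast:.2f}")
print(f"slow sensor mean daily max: {slow:.2f}")
```

The fast sensor's mean daily maximum comes out noticeably higher on identical simulated weather, which is the direction of the effect Hager describes, though nothing here says the real-world size is 0.9 °C.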
I understood it to be some sort of a calibration problem and not an issue with a momentary blast of warm air from somewhere.
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
That is an excellent question. A previous post here may have some answers:
http://wattsupwiththat.com/2015/02/01/uncertainty-ranges-error-bars-and-cis/
As well, my section 2 above goes over some things.
As well, Nick Stokes has much to read here, with further links to other things:
http://www.moyhu.blogspot.com.au/p/moyhu-index.html
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
According to NOAA, a pause is statistically significant if it goes for 15 years or more at the 95% level:
”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
From section 2 and rows 9 and 10 of the table, only GISS has a statistically significant pause shorter than 15 years.
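As a rough illustration of what such a significance test involves, here is a minimal ordinary-least-squares sketch (synthetic monthly data with zero true trend; note it ignores the autocorrelation present in real temperature series, which widens the true interval, so treat it as illustrative only):

```python
# Minimal sketch of testing whether a linear trend differs from zero:
# OLS slope with an approximate 95% confidence interval.
import math
import random

random.seed(1)
n = 180                                   # 15 years of monthly anomalies
x = [i / 12.0 for i in range(n)]          # time in years
y = [random.gauss(0, 0.1) for _ in x]     # zero true trend plus noise

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
resid = [yi - (ybar + slope * (xi - xbar)) for xi, yi in zip(x, y)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)  # slope std error

lo, hi = slope - 1.96 * se, slope + 1.96 * se
print(f"trend = {slope:.4f} C/yr, 95% CI [{lo:.4f}, {hi:.4f}]")
print("zero trend excluded" if lo > 0 or hi < 0 else "zero trend not excluded")
```

When the interval straddles zero, the trend is statistically indistinguishable from a pause at the 95% level, which is the sense in which the NOAA quotation above uses "rule out".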
[snip – Epiphron Elpis is yet another David Appell sockpuppet.]
Epiphron Elpis,
You asked, Werner answered with citations… and you go all ad hominem on him.
Instead, you could just say, “Thanks.”
Pot meet kettle
Yes, you and Epiphron have that in common.
No Dbstealey, I don’t call people names like you do
Ms socks,
You’ve specifically lumped me in with Holocaust deniers — mass murderers. There aren’t any more vicious names you can call anyone — bar none.
But if silly labels like ‘jamoke’ [someone who wastes his time in a coffe shop all day] are more than you can bear, please fill this out and submit it:
http://i0.kym-cdn.com/photos/images/original/000/060/713/butthurt-form.jpg
☺ ☺ ☺
Dbstealey
…
“You’ve specifically labeled me with Holocaust deniers”
…
Please post a link to my comment where I did that.
What, you demand that I go back and find “denier” comments? LOL.
What’s your billing address? I charge by the hour.
You made the assertion, now back it up with a citation.
Epiphron Elpis February 4, 2015 at 12:20 pm
“Werner, you really need to learn how to calculate statistical significance. It’s not the complicated.”
It’s more complicated than you seem to understand. What is your null hypothesis?
You can test that the trend fails to be significantly different from zero. But that is only a failure to reject the null; it doesn’t prove anything useful. Or you could try to show that some prediction of a non-zero trend can be rejected. What would you choose?
This is amusing.
Beg.
Maybe that will make a difference. ☺
Nick Stokes
You assert
Say what!?
It is very, very useful to determine “that the trend fails to be significantly different from zero” because it does “prove” there is no discernible trend at the stated level of confidence.
When alarmists claim warming is a problem then it is “useful” to determine if the warming is sufficient to be discernible because that informs about the validity of their claim.
Richard
Mr. “Socrates” FYI, your email as plugged into the comments form is invalid. By policy, a valid email address is required to comment here. Please correct it.
Sorry, we were unable to deliver your message to the following address.
:
Remote host said:
550-5.1.1 The email account that you tried to reach does not exist. Please try
550-5.1.1 double-checking the recipient’s email address for typos or
550-5.1.1 unnecessary spaces. Learn more at
550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 os5si10424744pab.197 – gsmtp
[RCPT_TO]
“all satellite records still show a pause of several years. Due to a relatively constant adiabatic lapse rate, this is very puzzling. Do you have any suggestions as to why there is this discrepancy?”
Anthony Watts has been talking about it for many years. IMO the discrepancy between satellite and land temperature measurements is due to the urban heat island effect. Satellites are far away from radiating surfaces. Another possible cause is natural ocean cycles. Sea surface temperature measurements are taken from 10 to 70 feet deep. The ENSO index was mostly positive in 2002-2007.
http://pmel.noaa.gov/co2/files/multivariateensoindex.gif
Of course the sea was at record warmth this year. But that did not seem to affect the satellites for some reason.
Yes, because satellites measure the top 0.01 mm layer of the ocean. By the way, if you want to detect greenhouse warming, satellites are the best, since they measure the temperature of the lower troposphere. The greenhouse effect originates in the troposphere, and heat is transmitted to the surface by infrared radiation.
RSS’s site is really bitchy, complaining about “denialists”
Very unprofessional for self-proclaimed scientists.
We are the ones looking at their actual data.
The other elephant in the room (possibly not mentioned in these comments) is that weather balloon global temperature trend measurements are in fairly close agreement with sats. That is unlikely to be coincidental.
Brozek:
This is the last month you can have this kind of post in its present form. At UAH’s data site I can see no update of version 5.5, only 5.6. 5.5 seems to be discontinued.
Version 5.6 has a positive trend from 2005 already in 2014.
Will be interesting to see what your solution will be. Only show RSS? That will at least make it very clear how the RSS is the outlier.
Like all true believers, rooter is desperate to claim global warming where there is none.
If UAH is using version 5.5, why does Roy Spencer only ever refer to version 5.6 when posting monthly LT anomalies. e.g
http://www.drroyspencer.com/2015/02/uah-global-temperature-update-for-jan-2015-0-35-deg-c/
Spencer has been reporting the 5.6 version since July 2013. Now 5.5 seems to have been discontinued. Woodfortrees needs to update UAH and Hadcrut4.
Neither UAH or RSS show 2014 to be anywhere close to being as warm as 1998.
David A says:
“Neither UAH or RSS show 2014 to be anywhere close to being as warm as 1998”
Of course not. 2014 was not the strongest El Niño year in 100 years. It did not even qualify as an El Niño year.
In response to comments above, version 5.5 can be found here. It is updated from 1 to 4 days after the 5.6 anomaly is given.
http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.5.txt
Perhaps. But why do you ignore the version that Spencer prefers? Do you know more about his product than he does?
[this comment is coming from a proxy server, and has a fake email address along with the fake name, policy violation, added to banned list -mod]
I do mention both in my article. However WFT only allows me to graph version 5.5.
Sure gives the impression of cherry-picking. Use the preferred version and you will have only one index left to present.
rooter sure gives the impression of being a pest. No answer ever satisfies him, it’s always like the kid who says, “But why…” to everything he has been told.
David A says:
“Neither UAH or RSS show 2014 to be anywhere close to being as warm as 1998″
Rooter says, “Of course not.”
=============================
Thank you for admitting that 2014 was not the warmest ever. Of course rapidly increasing sea ice in the Arctic, above average global sea ice, record SH sea ice, record cold Great Lakes, and near record NH snow cover all indicate the satellites were more accurate than the maladjusted surface record.
“Do you have any suggestions as to why there is this discrepancy?”
Figures don’t lie. Liars figure.
Walter Sobchak:
One could argue that Walter is engaged in the latter here himself, by engaging in clear examples of cherry picking.
Spencer has a good explanation of the discrepancy. The resolution is that the cherry-picked data Walter Sobchak has decided to put so much weight on is wrong.
@RACookPE1978.
You said:
“warrenlb has established nothing. Other than that over 280 replies he demands everybody follow HIS chosen “authorities” – even when such actions would be – ARE! – responsible for the deaths of millions of innocents, the harm to billions. For no tangible good, only imagined potential results that explicitly cause harm and injury, while not even affecting the cause that he fears!”
And: “But HE demands WE follow and obey only HIS government-paid self-selected “experts” because THEY claim THEY are the ONLY “experts”?”
Quite a rant, RACook. Let’s point out a couple of things:
1) I don’t demand anything, I post information about the Scientific consensus, and the conclusions of that consensus. We know by now that such information upsets you. Don’t fret – it’s just facts.
2) You say MILLIONS HAVE DIED because of scientific research and consensus on AGW? Really? You indeed have lost it.
3) Government paid experts? The 10s of thousands of peer reviewed papers summarized in the IPCC reports are products of independent researchers from ALL OVER THE WORLD. And the IPCC funds no research and does no research itself.
Once again, you’re implying a worldwide conspiracy of money and enforced conformity of conclusions by ALL the Institutions of Science, worldwide. We can see why you fall back on such a ludicrous claim – it appears to be all you’ve got once you see that the world of science is not on your page.
Warren, peer reviewed science rejects most every assertion within the IPCC. All the CO2-induced harms feared in the IPCC are based on the WRONG, consistently too warm models, and in the real world those harms are failing to manifest. There are hundreds of solid peer reviewed reports ignored by the IPCC. There have been books written about the IPCC scandals and its use of non peer reviewed alarmist literature.
Further, the IPCC ignores many solid peer reviewed reports demonstrating the benefits of increased CO2.
The fact that money corrupts should not be seen as any silly conspiracy theory, but as a well known trait of human nature. The fact that people in general, and governments in particular, seek group power should not be a shock to anyone but the most ignorant.
Now please show me the evidence of your purported consensus, but first define what the consensus is. Free warning here. If you fail to put the “C” in “CAGW” then your consensus is meaningless for public policy.
You see Warren, this is the consensus you need to find, but it is in truth MIA, just as the “C”, the “G” and the “W” are MIA in CAGW:
New York Times – November 18, 2007
…..The IPCC chairman, Rajendra Pachauri, an engineer and economist from India, acknowledged the new trajectory. “If there’s no action before 2012, that’s too late,” Pachauri said. “What we do in the next two to three years will determine our future. This is the defining moment.”…..
Warren are you finding the consensus on this?
@David A
You replied to my post to RACook, and you have it wrong:
My prior posts, referred to in my post to RACOOK that you replied to, asserted ALL the world’s institutions of science — Science Academies, Scientific Professional Associations, major Universities, NASA, NOAA – conclude AGW in published statements or studies. No one on this forum has been able to falsify my claim — for good reason, as there are no exceptions.
You also claim peer-reviewed scientific papers dispute AGW. I invite you to cite one. Only about 0.3% of the roughly 25,000 published in the last few decades dispute any aspect of AGW; NONE dispute the overall conclusion that Man’s activities are warming the planet.
Nothing in your posts falsifies the above claims. Good luck finding anything.
Nothing in your posts even addresses my assertions. Your claims are meritless. There is no consensus on CAGW; the “C” is missing. Yes, we all agree the world has warmed since we came out of the Little Ice Age, and we agree that human influence has had an impact on climate. Now go find a consensus that that effect is a disaster. You will not.
Regarding the peer reviewed literature, I will give you just a little sample from one area. There are dozens of peer reviewed reports showing a lower climate sensitivity to CO2 than the IPCC uses: http://www.populartechnology.net/2009/10/peer-reviewed-papers-supporting.html#Sensitivity
The same can be done on any subject you name. The C in CAGW is missing, the benefits are known and manifesting.
Of course where you really go off the rails is ignoring the IPCC extensive use of grey literature in the majority of their chapters to support their manifestly wrong assumptions.
BTW the vast majority of your 25,000 published papers have zero to do with the physics of Earth’s atmosphere. They only engage the ifs and maybes of the WRONG IPCC models, and say if the models are right and if there is an increase in drought here, this bad thing will happen. GIGO, models all the way down; a circle jerk of government funding pushing a political agenda.
Warren, I found more folk below to support your consensus, but still a long way from the 32,000-plus scientists against it.
Gordon Brown: We have fewer than fifty days to save our planet from catastrophe
……..Copenhagen must be such a time.
There are now fewer than 50 days to set the course of the next 50 years and more. So, as we convene here, we carry great responsibilities, and the world is watching. If we do not reach a deal at this time, let us be in no doubt: once the damage from unchecked emissions growth is done, no retrospective global agreement, in some future period, can undo that choice. By then it will be irretrievably too late….
Guardian – 8 July 2008
100 months to save the Earth
There isn’t much time to turn things around. And today’s G8 announcements on climate change set the bar too low
……The world’s climate experts say that the world’s CO2 output must peak within the next decade and then drop, very fast, if we are to reach this sort of long term reduction. In short, we have about 100 months to turn the global energy system around. The action taken must be immediate and far reaching……
[John Sauven – Greenpeace]
WWF – 7 December 2009
12 days to save the planet!
…“The world has given a green light for a climate deal. But the commitments made so far won’t keep the world under 2° of warming, This has to change over the next 12 days. …
[WWF-UK’s head of climate change, Keith Allott]
=======================
Guardian – 18 January 2009
‘We have only four years left to act on climate change – America has to lead’
Jim Hansen is the ‘grandfather of climate change’ and one of the world’s leading climatologists…..
“We cannot now afford to put off change any longer. We have to get on a new path within this new administration. We have only four years left for Obama to set an example to the rest of the world. America must take the lead.”
You see Warren, there is harmony between skeptics and alarmists; both know the IPCC models are crap…
“The data doesn’t matter. We’re not basing our recommendations
on the data. We’re basing them on the climate models.”
– Prof. Chris Folland,
Hadley Centre for Climate Prediction and Research
“The models are convenient fictions
that provide something very useful.”
– Dr David Frame,
climate modeler, Oxford University
”We need to get some broad based support, to capture the public’s imagination… So we have to offer up scary scenarios, make simplified, dramatic statements and make little mention of any doubts… Each of us has to decide what the right balance is between being effective and being honest.”
Stephen Schneider,
Stanford Professor of Climatology,
Lead author of many IPCC reports
Warren, Warren Warren, you really think nobody within the IPCC has a political axe to grind?
”Unless we announce disasters no one will listen.”
Sir John Houghton,
First chairman of the IPCC
”It doesn’t matter what is true, it only matters what people believe is true.”
Paul Watson,
Co-founder of Greenpeace
”We’ve got to ride this global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic and environmental policy.”
Timothy Wirth,
President of the UN Foundation
”No matter if the science of global warming is all phony… climate change provides the greatest opportunity to bring about justice and equality in the world.”
Christine Stewart,
former Canadian Minister of the Environment
Dozens more if you like.
Warren, are you here? Sorry if this duplicates.
Warren, peer reviewed science rejects most every assertion within the IPCC. All the CO2-induced harms feared in the IPCC are based on the WRONG, consistently too-warm models, and in the real world those harms are failing to manifest. There are hundreds of solid peer reviewed reports ignored by the IPCC. There have been books written about the IPCC scandals and its use of non-peer-reviewed alarmist literature.
Further, the IPCC ignores many solid peer reviewed reports demonstrating the benefits of increased CO2.
The fact that money corrupts should not be seen as any silly conspiracy theory, but as a well-known trait of human nature. The fact that people in general, and governments in particular, seek group power should not be a shock to anyone but the most ignorant.
Now please show me the evidence of your purported consensus, but first define what the consensus is. Free warning here. If you fail to put the “C” in “CAGW” then your consensus is meaningless for public policy.
In regression analysis, it is necessary to give the 95% confidence range. As pointed out by McKitrick, there has been no statistically significant warming since 1996 in the satellite data. And based on weather balloon radiosondes, no statistically significant warming over 1979-2003.
“Over 1979–2003, the satellite‐equivalent tropical lower tropospheric temperature trend has likely (5–95% confidence range) been between −0.01 K/decade and 0.19 K/decade (0.05–0.23 K/decade over 1958–2003) with a best estimate of 0.08 K/decade (0.14 K/decade)”
http://www.metoffice.gov.uk/hadobs/quarc/Thorne-2011.pdf
It seems the lower troposphere has not been warming, or it is uncertain whether there was warming. It may be that all observed warming since 1979 is UHI effect plus the PDO and ENSO. By the way, the temperature trend over 1947-1979 is flat or slightly cooling.
Thorne et al.’s best estimate of warming over 1979-2003 is 0.08 K/decade, or about 0.2 K over that period. My regression analysis of the radiosonde HadAT2 record over 1958-1978 gives a cooling of -0.15 K/decade, or about -0.32 K over that period. Therefore, there is slight net cooling of the lower troposphere over the period 1958-2003.
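The arithmetic behind these figures is just trend times elapsed decades; a minimal sketch (the trend values are taken from the comment above, not recomputed from the underlying series, and the function name is illustrative):

```python
def total_change(trend_k_per_decade, start_year, end_year):
    """Convert a linear trend (K/decade) into total change over a period (K)."""
    return trend_k_per_decade * (end_year - start_year) / 10.0

warming_1979_2003 = total_change(0.08, 1979, 2003)     # ~ +0.19 K over 2.4 decades
cooling_1958_1978 = total_change(-0.15, 1958, 1979)    # ~ -0.32 K over 2.1 decades
net_1958_2003 = warming_1979_2003 + cooling_1958_1978  # slightly negative overall
```

Summing the two sub-periods gives roughly -0.12 K, which is the “slight net cooling” claimed above.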
The statement you quoted implies that warming is ~20X more likely to be taking place than cooling, so where do you get “it is uncertain if there was warming” from?
The probability of cooling is within the 95% confidence range. If I got two pair in a five-card poker hand, would you conclude that I cheated? The probability of two pair is less than 0.05, and the probability of cooling is higher than that. Scientific evidence requires more than a 95% confidence level.
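Both probabilities in this exchange can be checked directly. A minimal sketch, assuming (purely for illustration) a normal distribution for the Thorne et al. trend with best estimate 0.08 K/decade and 5th percentile -0.01 K/decade:

```python
from math import comb
from statistics import NormalDist

# Implied probability of cooling, backed out from the quoted 5-95% range
# under an assumed normal distribution for the trend.
sigma = (0.08 - (-0.01)) / NormalDist().inv_cdf(0.95)  # ~0.055 K/decade
p_cooling = NormalDist(mu=0.08, sigma=sigma).cdf(0.0)  # ~0.07

# Exact probability of exactly two pair in a 5-card poker hand:
# choose 2 ranks for the pairs, 2 suits each, then a kicker of another rank.
p_two_pair = comb(13, 2) * comb(4, 2)**2 * 11 * 4 / comb(52, 5)  # ~0.0475
```

Under this assumption the cooling probability (~7%) does exceed the two-pair probability (~4.75%), supporting the comment above; it also makes warming roughly 13 times more likely than cooling rather than 20, since the best estimate sits above the midpoint of the range.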
I really wish we could get some hard evidence they are intentionally falsifying the ground measurements. We need an inside whistleblower to grow a spine.
Perhaps we already have that evidence. What we may need now is some sort of a science court that all sides can appeal to and which would decide if certain things could be justified or not.
Concerning sea level rise,
NOAA tidal gauge data show a flat sea level trend (meaning no rise in sea level) for the last 15-20 years, excepting those locales undergoing subsidence, such as the Chesapeake Bay area. See NOAA Mean Sea Level Trend chart for each of the gauge stations.
Yes, and so the gauges show no acceleration from the early part of the SL rise graph, long before the satellites.
So using the SAME methodology as was used to produce the majority of the last century’s SL estimates, there has been a slowing down of SL rise.
Not puzzling at all why the discrepancy between surface data and satellite data. The massive warming adjustments that have been done to surface data, for no objective reason at all, explains all of the discrepancy. See Paraguay, Bolivia, USHCN, Rutherglen, Amberley, New Zealand etc etc etc.
Well, if this is the “warmest evah” then I don’t mind some more warming. Back in 2005, when I was getting interested in the claimed global warming stuff, I found out what they did to adjust for the UHI effect and realized they adjusted the data the wrong way: they cooled the past. That way they got more warming recently, and that was and is wrong. Nowadays we have found out that all the massaging of all the data works the same way, so any claim depending on any surface data I will not trust or even consider looking at. The only reasonably trustworthy data are the satellite sets, and they show nothing of what the movement is throwing at us. Of course I never trust any movement, be it right, left or center. They all have a hidden purpose. But the left side is the more dangerous one, so whatever they have to say has to be dissected in every detail. So far they have never been reliable and probably never will be.
For example, my wife thinks MST was good for the Chinese people. But checking the old Japanese war records reveals the Reds in China didn’t even fight the enemy under MST. Chiang Kai-shek did the fighting and lost most of the time because his army took all the losses. But he didn’t get any credit because he didn’t write the history books. This is the same situation we now have with the global warming records, and it’s a shame we let this go on.
Update for UAH version 5.6: The January anomaly was 0.35 which would rank it in third place if it stayed this way. (Of course it probably won’t!) The period of time for a flat slope remains at 6 years, 0 months. It just got displaced by 1 month to run from February 1, 2009 to January 31, 2015.
Update for RSS: The January anomaly was 0.367 which would rank it in third place if it stayed this way. (Of course it probably won’t!) The period of time for a flat slope dropped to 18 years and 2 months from December 1, 1996 to January 31, 2015.
Update for Hadsst3: The January anomaly was 0.440 which would rank it in second place if it stayed this way. (Of course it probably won’t!) While its flat period is under a year and not worth mentioning, it is noteworthy that the anomalies have dropped every month since August.
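The “flat slope” periods quoted in these updates amount to finding the earliest start month from which the ordinary least-squares trend through the latest month is non-positive. A minimal sketch, assuming the monthly anomalies are available as a NumPy array (the function name and demo data are illustrative, not WFT’s actual code):

```python
import numpy as np

def flat_period_start(anoms):
    """Return the earliest index i such that the OLS slope of anoms[i:]
    (one value per month, oldest first) is <= 0, or None if every
    trailing trend is positive."""
    n = len(anoms)
    for i in range(n - 2):  # need at least 3 points for a trend
        x = np.arange(n - i, dtype=float)
        slope = np.polyfit(x, anoms[i:], 1)[0]
        if slope <= 0:
            return i
    return None

# Illustrative series: a strong rise for five months, then a slow decline.
demo = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.9, 3.8, 3.7, 3.6, 3.5])
```

Applied to `demo`, this returns 4, i.e. the flat/cooling stretch runs from the fifth month to the end; the dataset updates above shift this start point each month as new anomalies arrive.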
Reblogged this on Climatism and commented:
Why is there this [temp] difference between the satellites and the [warmer/warming] surface measurements? Is one more accurate than the other?