No Statistically Significant Satellite Warming For 23 Years (Now Includes February Data)

Guest Post by Werner Brozek, Comment Included From David Hoffer, Edited by Just The Facts:

WoodForTrees.org – Paul Clark – Click the pic to view at source

In the above graphic, the green line is the slope since May 1993 without consideration of error bars. When including error bars, the range could be as low as zero as indicated by the blue line. It could also be an equal amount above the green line as indicated by the purple line.

The numbers that were used to generate the above graphic are from Nick Stokes’ Temperature Trend Viewer site.

For RSS, the numbers are as follows:

Temperature Anomaly trend

May 1993 to Feb 2016

Rate: 0.871°C/Century;

CI from -0.022 to 1.764;

t-statistic 1.912;

Temp range 0.118°C to 0.316°C

In other words, for the 22 years and 10 months since May 1993, there is a very small chance that the slope is negative.

For UAH6.0beta5, the numbers are as follows:

Temperature Anomaly trend

Jan 1993 to Feb 2016

Rate: 0.911°C/Century;

CI from -0.009 to 1.830;

t-statistic 1.941;

Temp range -0.001°C to 0.210°C

So in other words, for 23 years and 2 months, since January 1993, there is a very small chance that the slope is negative.
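For readers who want to see the mechanics behind figures like these, here is a minimal Python sketch of how a trend in °C/century and its 95% confidence interval can be computed from a monthly anomaly series by ordinary least squares. The series below is synthetic (not the actual RSS or UAH data), and the error bars are naive: Nick Stokes' viewer additionally corrects for autocorrelation in the residuals, which widens the interval, so this only illustrates the idea.

```python
import math
import random

def trend_ci(anoms, tcrit=1.96):
    """OLS trend of a monthly anomaly series, returned in degC/century,
    with a naive 95% confidence interval (no autocorrelation correction)."""
    n = len(anoms)
    t = [i / 12.0 for i in range(n)]                 # time in years
    tbar, ybar = sum(t) / n, sum(anoms) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar)
                for ti, yi in zip(t, anoms)) / sxx   # degC per year
    resid = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, anoms)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return 100 * slope, 100 * (slope - tcrit * se), 100 * (slope + tcrit * se)

# Synthetic series: a 0.9 degC/century trend plus noise over ~23 years.
random.seed(0)
series = [0.009 * (i / 12.0) + random.gauss(0, 0.15) for i in range(278)]
rate, lo, hi = trend_ci(series)
```

With real monthly data in place of the synthetic series, the three returned numbers play the role of the rate and the lower/upper confidence limits quoted above.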

As mentioned in my January post, working back from February 2016 there is now no period worth mentioning over which the slope is negative on any of the five data sets I am analyzing.

As a result, my former Section 1 will not be shown for the foreseeable future.

My last post drew an excellent comment from David Hoffer that I would like to share here, both to give it wider exposure and to invite your thoughts:

davidmhoffer

March 2, 2016 at 10:11 am

1. The “Pause” hasn’t disappeared. It now just has a beginning and an end. But it is right there in the data where it always was, and it doesn’t cease to exist merely because we can’t calculate one starting from the present and working backwards.

2. The “Pause” was never significant in terms of showing that CO2 doesn’t heat up the earth. It only became significant because the warmist community (Jones, Santer, etc.) said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17, and made a big deal out of it.

So regardless of whether the “Pause” has ended or not, what we have is conclusive evidence that the models:

a) grossly underestimated natural variability, or

b) grossly overestimated CO2 sensitivity, or

c) both

In all three scenarios above, natural variability dominates in terms of any risk associated with a changing global temperature. That’s what we should be studying first and foremost. Once we understand it, then we can determine how much CO2 changes natural variability. Trying to determine CO2 sensitivity without first understanding the natural variability baseline that it runs on top of is a fool’s errand. Unfortunately, fools seem determined and well funded, and so they continue to try and do just that.

The world has been warming for 400 years, almost all of it due to natural variability. It will continue to warm (I expect) and most of the warming will be due to natural variability, which we just learned from this last 20 years of data is a lot bigger deal than CO2.

(End of David’s post)

In the sections below, we will present you with the latest facts. The information will be presented in two sections and an appendix. The first section will show for how long there has been no statistically significant warming on several data sets. The second section will show how 2016 so far compares with 2015 and the warmest years and months on record so far. For three of the data sets, 2015 also happens to be the warmest year. The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

For this analysis, data was retrieved from Nick Stokes’ Trendviewer available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 11 and 23 years according to Nick’s criteria. CI stands for the confidence interval at the 95% level.

The details for several sets are below.

For UAH6.0: Since January 1993: CI from -0.009 to 1.830

This is 23 years and 2 months.

For RSS: Since May 1993: CI from -0.022 to 1.764

This is 22 years and 10 months.

For Hadcrut4.4: Since October 2001: CI from -0.016 to 1.812 (Goes to January)

This is 14 years and 4 months.

For Hadsst3: Since May 1996: CI from -0.002 to 2.089

This is 19 years and 10 months.

For GISS: Since March 2005: CI from -0.004 to 3.688

This is exactly 11 years.
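How a "since" date like those above is found can be illustrated with a simplified scan: move the start month forward until the trend from that month to the end of the series has a lower 95% bound at or below zero. This sketch uses synthetic data and naive (non-autocorrelation-corrected) errors, so it only illustrates the idea behind Nick's criteria, not his exact method.

```python
import math

def trend_lower_bound(anoms, tcrit=1.96):
    """Lower 95% bound on the OLS trend (degC/century), naive errors."""
    n = len(anoms)
    t = [i / 12.0 for i in range(n)]
    tbar, ybar = sum(t) / n, sum(anoms) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, anoms)) / sxx
    resid = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, anoms)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return 100 * (slope - tcrit * se)

def first_nonsignificant_start(anoms, min_len=24):
    """Earliest start index from which the trend to the end of the series
    cannot be statistically distinguished from zero (lower bound <= 0)."""
    for start in range(len(anoms) - min_len):
        if trend_lower_bound(anoms[start:]) <= 0:
            return start
    return None

# Synthetic series: warms at 1 degC/century for 10 years, then goes flat.
synthetic = [0.01 * min(i, 120) / 12.0 for i in range(240)]
idx = first_nonsignificant_start(synthetic)
```

On the synthetic series the scan lands somewhere at or before the month the warming stops, which is exactly the behaviour the "since" dates in this section describe.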

Section 2

This section shows data about 2016 and other information in the form of a table. The table lists the five data sources along the top and repeats them at the bottom so they remain visible. The sources are UAH, RSS, Hadcrut4, Hadsst3, and GISS.

Down the column are the following:

1. 15ra: This is the final ranking for 2015 on each data set.

2. 15a: Here I give the average anomaly for 2015.

3. year: This indicates the warmest year on record so far for that particular data set. Note that the satellite data sets have 1998 as the warmest year and the others have 2015 as the warmest year.

4. ano: This is the average of the monthly anomalies of the warmest year just above.

5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year. The 2016 records are not included here.

6. ano: This is the anomaly of the month just above.

7. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.

8. sy/m: This gives the duration of row 7 in years and months.

9. Jan: This is the January 2016 anomaly for that particular data set.

10. Feb: This is the February 2016 anomaly for that particular data set.

11. ave: This is the average anomaly of all months to date taken by adding all numbers and dividing by the number of months.

12. rnk: This is the rank that each particular data set would have for 2016 without regard to error bars and assuming no changes. Think of it as an update 10 minutes into a game.

Source   UAH     RSS     Had4    Sst3    GISS
1.15ra   3rd     3rd     1st     1st     1st
2.15a    0.263   0.358   0.745   0.592   0.86
3.year   1998    1998    2015    2015    2015
4.ano    0.484   0.550   0.745   0.592   0.86
5.mon    Apr98   Apr98   Dec15   Sep15   Dec15
6.ano    0.743   0.857   1.009   0.725   1.10
7.sig    Jan93   May93   Oct01   May96   Mar05
8.sy/m   23/2    22/10   14/4    19/10   11/0
9.Jan    0.542   0.663   0.899   0.732   1.14
10.Feb   0.834   0.974   1.057   0.604   1.35
11.ave   0.688   0.819   0.978   0.668   1.25
12.rnk   1st     1st     1st     1st     1st
Source   UAH     RSS     Had4    Sst3    GISS
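Rows 11 and 12 can be reproduced directly from rows 4, 9 and 10. The sketch below uses the table's own numbers; small last-digit differences (e.g. RSS gives 0.8185 here versus the published 0.819) presumably arise because the published averages were computed from anomalies carrying more decimal places.

```python
# Rows 9-10 (Jan, Feb 2016 anomalies) and row 4 (record-year average).
table = {
    "UAH":  ([0.542, 0.834], 0.484),
    "RSS":  ([0.663, 0.974], 0.550),
    "Had4": ([0.899, 1.057], 0.745),
    "Sst3": ([0.732, 0.604], 0.592),
    "GISS": ([1.14, 1.35], 0.86),
}

ave = {k: sum(m) / len(m) for k, (m, _) in table.items()}        # row 11
rnk = {k: ("1st" if ave[k] > rec else "not 1st")                 # row 12
       for k, (_, rec) in table.items()}
```

Every data set's two-month 2016 average exceeds its record-year average, which is why row 12 reads "1st" across the board.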

If you wish to verify all of the latest anomalies, go to the following:

For UAH, version 6.0beta5 was used. Note that WFT uses version 5.6. So to verify the length of the pause on version 6.0, you need to use Nick’s program.

http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta5.txt

For RSS, see: ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt

For Hadcrut4, see: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt

For Hadsst3, see: http://www.cru.uea.ac.uk/cru/data/temperature/HadSST3-gl.dat

For GISS, see:

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

To see all points since January 2015 in the form of a graph, see the WFT graph below. Note that UAH version 5.6 is shown. WFT does not show version 6.0 yet. Also note that Hadcrut4.3 is shown and not Hadcrut4.4, which is why many months are missing for Hadcrut.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2015. This makes it easy to compare January 2015 with the latest anomaly.
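The offsetting described above is just a subtraction of each series' starting value. A tiny sketch, using made-up illustrative numbers rather than the real anomalies:

```python
def offset_to_common_start(series):
    """Shift each series so its first value is zero, so that changes
    since the first month can be compared directly across datasets."""
    return {name: [v - vals[0] for v in vals] for name, vals in series.items()}

# Purely illustrative numbers, not real anomalies:
raw = {"A": [0.30, 0.45, 0.60], "B": [0.80, 0.85, 1.10]}
shifted = offset_to_common_start(raw)   # both series now start at 0.0
```

After the shift, the two series can be compared month by month even though their absolute baselines differ.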

Appendix

In this part, we are summarizing data for each set separately.

UAH6.0beta5

For UAH: There is no statistically significant warming since January 1993: CI from -0.009 to 1.830. (This is using version 6.0 according to Nick’s program.)

The UAH average anomaly so far for 2016 is 0.688. This would set a record if it stayed this way. 1998 was the warmest at 0.484. The highest ever monthly anomaly was in April of 1998 when it reached 0.743. This is prior to 2016. The average anomaly in 2015 was 0.263 and it was ranked 3rd.

RSS

For RSS: There is no statistically significant warming since May 1993: CI from -0.022 to 1.764.

The RSS average anomaly so far for 2016 is 0.819. This would set a record if it stayed this way. 1998 was the warmest at 0.550. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. This is prior to 2016. The average anomaly in 2015 was 0.358 and it was ranked 3rd.

Hadcrut4.4

For Hadcrut4: There is no statistically significant warming since October 2001: CI from -0.016 to 1.812. (Goes to January)

The Hadcrut4 average anomaly so far is 0.978. This would set a record if it stayed this way. The highest ever monthly anomaly was in December of 2015 when it reached 1.009. This is prior to 2016. The average anomaly in 2015 was 0.745 and this set a new record.

Hadsst3

For Hadsst3: There is no statistically significant warming since May 1996: CI from -0.002 to 2.089.

The Hadsst3 average anomaly so far for 2016 is 0.668. This would set a record if it stayed this way. The highest ever monthly anomaly was in September of 2015 when it reached 0.725. This is prior to 2016. The average anomaly in 2015 was 0.592 and this set a new record.

GISS

For GISS: There is no statistically significant warming since March 2005: CI from -0.004 to 3.688.

The GISS average anomaly so far for 2016 is 1.25. This would set a record if it stayed this way. The highest ever monthly anomaly was in December of 2015 when it reached 1.10. This is prior to 2016. The average anomaly in 2015 was 0.86 and it set a new record.

Conclusion

Warming does not become catastrophic just because we cannot go back from February 2016 and find a negative slope. This is especially true since it was a very strong El Niño, and not CO2, that was mainly responsible for the negative slope disappearing for now.


My goodness. The “pause” should have graduated college by now. 23 years! ( I graduated at 22 years old)
And yet, with no warming or very, very little warming, we see alarmism ramped up to all-time-high levels. Is there any doubt the whole charade is a political movement to increase the government’s control over the people?

CaligulaJones

“The “pause” should have graduated college by now”
Maybe, like Bluto, it’s in the 23rd year of a four-year degree.

Mark

Fear of Zombies is also at an all time high 😀

RHS

Perhaps the pause is Piling it Higher and Deeper?

Phil

Why do climate change believers quote the NOAA data and climate change skeptics quote RSS? Why is the actual global temperature never mentioned? If 2015 was the hottest year on record, what was the temperature? Why do I never see climatologists on TV warning about AGW? What is the difference between climatologists and climate scientists? What was the pre-industrial temperature that we are trying to prevent going over by 2 degrees C? What is the average ice coverage of the Arctic in square miles or kilometers? What was it last year and last decade? Why does the media not answer these questions? Why does the public not ask them?

Bloke down the pub

The pause hasn’t necessarily ended, as a strong La Niña will soon bring the mean back down.

Bloke at the Other End of The Bar

Too right, mate; no one should pay much attention to trends that include a high point that is clearly part of an El Niño cyclic peak and a data set that starts at a clear low relative to the rest of the set.
I once saw a truly risible paper on sea level rise that had a clear, cyclical signal from the PDO. The data started on a trough and ended on a crest, and then they calculated a trend line!! LOL You would get the same if you did that with a pure sine wave signal. It demonstrated absolutely nothing about sea level change but did demonstrate with crystal clarity the utter incompetence or pure unadulterated dishonesty of the authors.

george e. smith

” Statistically significant ” is an oxymoron.
Statistics is simply following the rules of some algorithm or other, in relation to a ‘ data set ‘ ; which is simply a finite set of finite real numbers.
For example suppose I had a data set of integers (one form of a ‘real’ number); specifically the set >> 5, 1, 9, 0, 4, 2, 7, 3, 8, 6 <<
Doesn't matter where I got these integers from; they just comprise a set I have an interest in.
Just for doodling, I take note of the fact that there are ten (10) elements in my data set. Lucky for me, that is a finite number of integers.
While doodling some more, I simply sum all of the elements in the set, and I get a sum of 45.
I start making a list of labels relating to my set. I decide to call the number (10) of elements in my set … n … (see how clever that is; n for 'number').
Then I decide to call the number (45) for the total sum, of the elements … s …
I should write a text book about these doodlings.
Then in a fit of creativity, I ponder what would happen if I divided s by n as in s/n .
I try that and I get the number 4.5 or 4 1/2 in a different form.
Now that is interesting; the remarkable result of my doodling is that I have found a new number (4.5) which isn't even a valid member of my set, because 4.5 is NOT an integer.
Well that must be an important result, that doodling with the elements of a data set, can give you new numbers which don't have any business being involved with the set.
I'll call that number (4.5) the " average " of my data set, and give it the symbol ' a '.
I can't really use 'a' for anything useful, because it isn't even an integer, so if I tried to use it to count sheep (so I could get to sleep), there wouldn't even be a sheep with the number 4.5 on its fleece anywhere.
But it is a number that I got by applying my algorithm a = s / n to my data set of integers, so I should tell people about it, as if it is something important; that's like saying it is
" significant ".
Well my data set of integers could care less about 4.5 or any other number that isn't an integer. The ONLY significance that ' a ' has is all in my head. I'm the one that thinks it's important, even though I can't even use it to count sheep !!
Human beings are the only entities in the entire universe that think that an average 'a', or ANY other statistical algorithmic result, is significant. It's all in our head. The real universe pays NO attention at all to ANY result of statistical doodling.
G

The real universe pays NO attention at all to ANY result of statistical doodling.

Do not tell this to CNN viewers on election night.

TonyL

@ George

Well my data set of integers could care less about 4.5 or any other number that isn’t an integer. The ONLY significance that ‘ a ‘ has is all in my head. I’m the one that thinks it’s important, even though I can’t even use it to count sheep !!

The butcher down at the neighborhood grocery store has no problem with 4.5 sheep, or any fractional animal. Maybe you just need a more expansive view of things.

@George,
Your point is well taken, but sometimes the average is very, very useful, and even predictable.
I got dragged to the dog track once by a brother-in-law with a gambling habit. Fortunately, the food was good, so I settled in with a nice lunch and a drink to watch several thousand people throw their money away. After some careful study of what was going on, and applying what math I had learned in my game theory, probability, and statistics classes, I was able to determine that most of the bets were horrible, as the expected (average) payoff of the day’s races expressed as a ratio of winnings divided by the wager amount when multiplied by the odds of winning was significantly less than one. This was true of all of the bets in the win, place, and show pool, and was even worse for the quinella and the trifecta pools.
But I did notice that unlike the horse track, the dog track always raced eight dogs at a time (ignoring the occasional scratch for a sick contestant), and the schedule was usually 13 to 15 races in a set (matinee or evening). Hmmm. Thirty roughly identical events might be enough to be statistically pliable. So we spent the entire day and night, collecting odds of winning and the average payoffs from 30 races and discovered an interesting fact – that it was possible to place a “to win” bet in the trifecta pool with a fancy and expensive box bet where the expected (average) trifecta payoff when multiplied by the blind odds “to win” yielded a ratio very nearly equal to one.
So all we had to do, based on the actual measured averages, was to improve our ability to pick the winning dog from the blind one-in-eight chance to a little better than one-in-seven, and we would make money. I am not stupid enough to believe that I can do that, as I know nothing about the dogs; not so much for the brother-in-law. Fortunately, I had noticed that there was a track affiliate selling a tip sheet for a dollar, and their prognostications were good enough to significantly improve the odds, somewhere between one-in-five and one-in-three depending on the day.
So now we have two averages (average odds, and average payout), which turned out to be very useful, and with the addition of some capital of our own, produced a third average; namely our winnings for the afternoon, usually about 150% in excess of what was wagered. Wow. The averages were worth real money … well, not enough to quit my day job of building advanced compute infrastructure platforms to perform computational studies in mapping electron fields from nuclear magnetic resonance and x-ray crystallography data to discover useful compounds for a big pharma company … but enough on average to get back every penny I paid for those fancy University degrees I studied for.
Cheers

george e. smith

By the way: nobody should take my slapstick pokes at statistical mathematics as ANY criticism of any of the data munching processes that go on in the various groups studying 'climate data'.
Whether it is Lord M of B’s Pause Algorithm, or the processes that RSS, or UAH follow; not even GISS or HADCrud.
I do respect the work these groups put into trying to consistently follow a process, in the hope of unlocking some secrets of what variables actually do make the climate tick.
My comic approach is merely a warning, that readers (and researchers) should be aware, that the real physical universe pays no attention whatsoever to either the graphs or the information they portray.
The first wood for trees graph in the present essay, conjures up all sorts of thoughts when humans look at it, even me.
It is a mistake to think that the real universe does anything of note in response to any of those squiggles. Some features clearly indicate something is happening.
There’s no possible way to say what the end result for the earth’s climate will be as a result of what is shown to have been calculated for that graph or others like it.
The probability rules that go along with statistics are largely derived from nothing more than rational thinking. No experiment is needed to validate them.
If you THINK about flipping a coin (rather than actually doing it) it is quite rational to presume that the coin is quite symmetrical, so it is just as likely to fall heads up as tails up. From that we can deduce, purely as a mind exercise, that any particular pattern of heads or tails from tossing one coin ten times has exactly one chance in 1024 (2^10) of occurring, including ten heads in a row.
So you can shock your friends by telling them that if you put the 366 dates of the leap year on identical balls in a shuffler, and draw them out one at a time, that the MOST PROBABLE pattern you will get, is simply Jan1, Jan2, Jan3, …… Dec 29, Dec, 30, Dec31, In strict calendar order.
You might as well bet them a beer on it.
Well, the chance of getting a strict calendar-order draw is 1 in 366! (one in factorial 366), a totally astronomical number.
So of course they will say bull bleep to that and take your beer bet.
Well of course, any possible drawing of 366 numbers one at a time has exactly the same probability of occurrence, so even though it is only one chance in factorial 366, the calendar-order drawing is indeed tied for the highest-probability drawing to come up.
So remember that the statistical machinations, are things that you do with the numbers that you or somebody else has observed.
But you start skating on thin ice when you try to endow the results with some magical predictive properties. They have none; it is all on you and your faith in what the numbers you already have con you into believing might happen in the future.
Now yes I firmly believe that the sun will rise in the East tomorrow morning, like it has always done in the recorded human history; at least on the surface of this planet.
But I have a theoretical model of the sun's planetary system that is a darn sight more concrete, and has far fewer known or unknown variables that could likely affect the outcome, than does the RSS or UAH data set and the climate variables that affect it.
The rules of probability really apply to the distributions of results in a large number of trials.
The problem that creates in the game of life is that this is NOT a dress rehearsal. Nothing actually will ever happen again. Something similar in some way, may happen, but after something has happened the starting point for what will happen next is no longer the same, so the same outcome cannot occur again.
Anything that can happen, will happen; just as soon as it can happen. Nothing will happen before it can happen, and once it can happen, it won’t wait for instructions on when to do it.
So the rules of probability may be quite rational in a mental exercise; but we will never ever get to test whether a predicted probability event will actually happen at the calculated frequency, because it will never happen again.
G
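George's point above about the 366 dated balls, that every specific ordering is equally likely, so none beats calendar order, is easy to check by simulation on a small "deck" (here 4 balls standing in for the 366 dated ones):

```python
import math
import random
from collections import Counter

n = 4                        # tiny stand-in for the 366 dated balls
trials = 240_000
random.seed(1)

counts = Counter()
for _ in range(trials):
    counts[tuple(random.sample(range(n), n))] += 1   # one full drawing

p_uniform = 1 / math.factorial(n)        # 1/24 for n = 4
calendar = tuple(range(n))               # the "strict calendar order" draw
observed = counts[calendar] / trials     # should be close to p_uniform
```

All 24 orderings turn up at close to the same frequency, calendar order included, which is exactly the point of the beer bet.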

Bindidon

You are right, La Niña is indeed in preparation! But nevertheless the mean temperature, though keeping a similar trend, gets higher and higher:
http://www.moyhu.org.s3.amazonaws.com/2016/2/1998_0.png
(Do not hesitate to subtract 0.43 °C from the values you see on the plot: GISS’ baseline is at 1951-1980 as opposed to UAH which has 1981-2010.)
For example, when you compare the 1997/98 El Niño with the actual one, you see for RSS3.3
– the down peak in Apr 1997 at -0.304 °C, and at +0.076 °C in 2015;
– the up peak in Feb 1998 at 0.723 °C, and at 0.866 °C in 2016.
So for the down peak you have 0.380 °C elevation in 18 years, i.e. 0.211 °C / decade.
Not bad, huh?
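Bindidon's suggested 0.43 °C shift is just a change of baseline period (the 0.43 figure is his, not verified here); mechanically, re-baselining subtracts the series' mean over the new reference window, as this sketch with a synthetic series shows:

```python
def rebaseline(values, years, ref_start, ref_end):
    """Re-express anomalies relative to the mean over a new reference
    period; returns the shifted series and the offset subtracted."""
    ref = [v for v, y in zip(values, years) if ref_start <= y <= ref_end]
    offset = sum(ref) / len(ref)
    return [v - offset for v in values], offset

# Synthetic annual series warming at 1 degC/century from 1950.
yrs = list(range(1950, 2011))
vals = [0.01 * (y - 1950) for y in yrs]
shifted, offset = rebaseline(vals, yrs, 1981, 2010)   # 1981-2010 baseline
```

The shift changes every anomaly by the same constant, so trends and peak-to-peak differences are untouched; only the zero line moves.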

Richard M

GISS is worthless. Why do you bother with that nonsense? If their past history is any measure, the current values will eventually be corrected downward to make future temperatures look warmer.
A better comparison of the current state of affairs for El Nino is to use the satellite data from the Tropics. That is, the only real way to get apples to apples is to limit the comparison to where the effect is the strongest.
For the three months of the El Niño peak, the anomalies were:

2016     1998
0.911    1.134
1.089    1.314
1.115    1.116
Sure doesn’t look to me like there’s been much change over the past 18 years.

Tom O

I’ve got a great idea. Let’s start the graph at, say, 1930 and see if it still shows much of an upward trend. Interesting how we now base our “trend” on the satellite data because it is helpful, as opposed to the long-range trend from the Little Ice Age, as it was previously. And during “the pause,” the satellite data was useless. There is so much dishonesty in the presentation of science now that I don’t think anyone will ever believe it again. It has placed itself alongside politicians, the media, and prostitutes, all servicing the public in different ways for profit.

Bindidon

Tom O
The reason why I have baselined (or normalised) all my temp data at home to the satellite era (1981-2010) is simply that
– we need a range to create deltas wrt the range’s mean in order to properly compare these deltas;
– satellite data didn’t exist before 1979.
Where is the problem? It is no more than a shift upwards / downwards. What is important are the deltas we compare altogether. Here’s a plot with 5 surface and 3 lower troposphere datasets:
http://fs5.directupload.net/images/160407/mpldxv5e.jpg
together with the same info as pdf you may scale by many 100%:
http://fs5.directupload.net/images/160407/gpyzqmz3.pdf
The “white” data is the mean of all eight.
Moreover: if you look at data starting around 1900, who cares about what the deltas are relative to?
During a “pause” all data is useless as long as the “pause” is kept sufficiently short: the uncertainty then mostly is higher than the trend itself.

Bindidon

Richard M
“GISS is worthless. Why do you bother with that nonsense?”
Do you really know enough about temperature measurement to write that?
Here is a plot of all the stuff: UAH 6.0beta5, RSS3.3, and GISS, for both periods ranging from
– Jan 1997 to Feb 1998
– Jan 2015 to Feb 2016.
http://fs5.directupload.net/images/160407/naigkfha.jpg
What you immediately can see is that between the two episodes there is an increase of about 0.4 °C, within only 18 years. The rest: nearly all similar indeed…
“Sure doesn’t look to me like there’s been much change over the past 18 years.”
Really? I would rather have appreciated the two events keeping to the same temperature level!

Bindidon,
According to GISS itself, there is nothing either unusual or unprecedented happening. Your attempted scares are factoids produced by magnifying the axes to show something as scary, when it’s not.
Here is the normal and natural state of affairs, straight from GISS:

AndyE

“Since 1993 there is a very small chance that the slope is negative” – But the alarmists don’t understand statistics, it seems.

Would you like to explain it?

george e. smith

It’s fun with numbers ! No need to explain having fun with numbers. Or just having fun.
G

AndyE

That’s simple, Nick Stokes: if there is a small chance that the slope is negative, that means that everything is too close to call – and it is stupid even to start worrying, let alone throwing billions of dollars at a problem which statistically (i.e. realistically) isn’t even there.

catweazle666

Wassup Nick?
Can’t you handle the truth?

“natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17 and made a big deal out of it”.
How long do they think they can continue to move the goalposts, “adjust” the data and attempt to make criminals out of those who dispute the “settled science”?

AndyG55

I liken it to a piece of chamber music.. mono-volume.
Then there is a loud clap of thunder.. (El Nino)
But the chamber music is still going on its steady way; you just can’t hear it for a short time.

“It only became significant because the warmist community (Jones, Santer, etc) said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17 and made a big deal out of it.”
No, they didn’t make a big deal out of it. They didn’t even say it. I’ve never seen what they actually said properly quoted here. The pause was not out of range relative to any of the statements that they in fact made.
So
“what we have is conclusive evidence”
No, you don’t. Actually, the pause, insofar as it existed, was only significant in that it might have been a prelude to a downturn. But it wasn’t.

AndyG55

“But it wasn’t.”
Yet !

kim

Yeah, yet. Nick heaves a sigh of relief, for now the future is certain.
===========

catweazle666

“Actually, the pause, insofar as it existed, was only significant in that it might have been a prelude to a downturn. But it wasn’t.”
Heh!
You keep right on telling yourself that so long as it makes you feel good.

Hello Nick,
What do you think will happen to global temperature (using UAH Lower Tropospheric measurements) in the next few years?
I expect the current El Niño warm spike will be largely reversed, back to about 0.2 °C, by end 2017 (or early 2018). I expect that natural cooling trend to continue, similar to the global cooling experienced from ~1940 to 1975 (or colder).
I suggest this cooling will demonstrate, like the natural cooling trend from 1940 to 1975, that climate sensitivity to atmospheric CO2 is less than 1 °C and probably much less.
I hope to be wrong about this global cooling prediction, because a warmer world is a gentler world, and a colder world leads to more human suffering and increased winter mortality.
Regards to all, Allan

Allan,
Well, the first thing I expect is that TLT measures, like UAH’s, will be phased out. They are too unreliable. I see that John Christy is already mostly quoting (eg to the Senate) TMT (mid-trop). The new RSS so far has TMT only. NOAA STAR produces TMT, but no TLT.
I see temperatures in the conventional way as being the sum of a forced trend and natural variation, including ENSO etc. So when we get a succession of La Nina years (which we can’t currently predict longterm), there will be a slowdown. But mostly rising.
TMT will rise more slowly. In GHE terms, heat flow is impeded there from below, and there is less radiative forcing from above. An extension of that is why stratospheric cooling is expected, and in fact it is hard to separate TMT from TLS.

kim

We don’t know the cause for the recovery from the LIA; we don’t know when that cause will cease.
=========

Thank you Nick for reply.
I say it will become net colder post-end-2017, and you say it will become net warmer – Is that correct?
I hope you are right.
Within reasonable limits, warmer is good, but colder is harmful to humanity.
Regards, Allan

Chris

Allan said: “Within reasonable limits, warmer is good, but colder is harmful to humanity.”
No, that is not true. Well over half the world’s population lives in the tropics, between +30 and -30 latitude. Those regions already are suffering from extremes of heat.

bit chilly

Chris, could you provide some evidence for that?

Sorry Chris but I’m pretty sure you are wrong.
Here is the evidence:
“Cold Weather Kills 20 Times as Many People as Hot Weather”
September 4, 2015
By Joseph D’Aleo and Allan MacRae
https://friendsofsciencecalgary.files.wordpress.com/2015/09/cold-weather-kills-macrae-daleo-4sept2015-final.pdf
[excerpts]
Cold weather kills. Throughout history and in modern times, many more people succumb to cold exposure than to hot weather, as evidenced in a wide range of cold and warm climates.
Evidence is provided from a study of 74 million deaths in thirteen cold and warm countries including Thailand and Brazil, and studies of the United Kingdom, Europe, the USA, Australia and Canada.
Contrary to popular belief, Earth is colder-than-optimum for human survival. A warmer world, such as was experienced during the Roman Warm Period and the Medieval Warm Period, is expected to lower winter deaths and a colder world like the Little Ice Age will increase winter mortality, absent adaptive measures. These conclusions have been known for many decades, based on national mortality statistics.

Canada has lower Excess Winter Mortality Rates than the USA (~100,000 Excess Winter Deaths per year) and much lower than the UK (up to ~50,000 Excess Winter Deaths per year). This is attributed to our better adaptation to cold weather, including better home insulation and home heating systems, and much lower energy costs than the UK, as a result of low-cost natural gas due to shale fracking and our lower implementation of inefficient and costly green energy schemes.

When misinformed politicians fool with energy systems, innocent people suffer and die.
****************

richard verney

No, that is not true. Well over half the world’s population lives in the tropics, between +30 and -30 latitude. Those regions are already suffering from extremes of heat.

Are they really?
It is notable that man came from this area and has been around for circa 4 million years, and modern man for circa 200,000 years, and yet almost all major human advance has come in the Holocene, with the majority post the Holocene Optimum.
Everything we know about life on planet Earth and the development of man suggests that warm is good and cold is bad, and it would be a godsend for the planet to be about 3 to 5 degrees warmer than it is today. A return to the Holocene Optimum is likely to be a very good thing. Unfortunately, that appears unlikely, and the trend (Holocene Optimum, Minoan Warm Period, Roman Warm Period, MWP, with each peak being slightly less than its forerunner) appears to be downwards, back into the throes of the ice age from which this interglacial came.

Chris

Bit Chilly, here’s the paper: http://www.academia.edu/2289822/The_world_by_latitudes_A_global_analysis_of_human_population_development_level_and_environment_across_the_north-south_axis_over_the_past_half_century
I had to register for it, so to save you the trouble: as of 2005, 40.8% of the world’s population lived at 30N or higher northerly latitudes, 46.7% lived between the equator and 30N, and 11.2% lived between the equator and 30S. So a total of 57.9% lived between -30 and +30. The data cited is from 2005; given the low birth rates in the 30N+ countries (especially Russia, but also many European countries), the imbalance will only grow over time. I am sure the 30N+ share is now below 40% and declining. Africa has high birth rates, as do parts of Asia (the Philippines and Indonesia in particular).

Only an idiot would claim that cold is good, and warmth is bad.
This comment is for the benefit of Chris, who actually believes that nonsense.

AndyG55

Just keep this in mind , Nick.
If you want to pretend that the current El Nino alters the 18+ year zero trend, then you are going to look almost as stupid as Michael Mann (who blamed it on CO2) once the following La Nina cancels it out.
Think ahead to what small reputation you have left.

“If you want to pretend that the current El Nino alters the 18+ year zero trend”
No pretence. It does. That is a matter of simple arithmetic. I’ll quote Werner:
“As mentioned in my January post, there is now no period of time going back from February 2016 where the slope is negative for any period worth mentioning on any of the five data sets I am analyzing.”
The pause, in RSS etc., was mainly due to the 1998 pulse, and to a lesser extent the warm years of 2001-5. This ENSO will at least neutralise the 1998 pulse. Of course, it may create a new series of pauses starting in 2015.

kim

At repause, greater length.
=========

tetris

This is descending into a yes-no contest with near-missed ad homs.
To Nick S and others: only two things are important:
1] that we have an increasing body of data that incontrovertibly shows the 170+ climate models, held up as evidence not so much of warming but N.B. of man-made warming, to be off-the-charts wrong; and
2] that it’s precisely those models -which wouldn’t last more than a couple of minutes in a normal due diligence environment- that are used by the green-infested political establishment to keep climate alarmism alive, brain wash the plebs, and provide the “justification” for obscenely expensive energy policies that are wrecking our economies [anyone who still doubts that, have a look at what’s unfolding in Germany and the UK].
Diverting the discussion to anything else (having intercourse with flies over this, that or the other) is tantamount to playing the ball into the long grass: a straw man.
Science 101: if the ensemble of best available, verifiable, data does not support the model [hypothesis], the model [hypothesis], not the data, is wrong. The rest is deliberate hand waving.

I don’t know which planet Nick inhabits (a warming one, apparently), but I heard (many times) the claim by warmists about pauses and their supposed non-significance well before I had any interest in global warming theory. One can say, for certain, that human-generated carbon emissions are not coming close to having the effect prophesied by the warming crowd. One can also see the utter transparency of their behavior: one warm summer and suddenly global warming will shortly doom the planet, but 20 years of mostly nothing is met with total silence and silly claims about the utter insignificance of the tiny yearly increase (“Hottest year on record [short record]”).
BUT, as implausible as the warmist crowd is about a future of much higher CO2 concentrations, they put forth an even more implausible solution in terms of pathetic alternative energy sources that have no possible hope of providing the reliable energy the planet requires at a reasonable price. Any semi-energy-literate folk can see that molten salt reactors can easily replace FFs at even lower rates and greater safety, etc., and burn up our nuclear waste problems in the process. To me, THOSE warmist alternative-energy-source claims are indications of the sheer, utter stupidity of the warmist crowd. One can easily argue in favor of molten salt reactors without ever mentioning their ability to reduce carbon emissions, a side benefit to their superior cost, safety, and superior renewability over wind/solar/hydro. We also must wonder what these warmists think will happen if those potent carbon levels in fact are reduced significantly as a result of moving away from FFs. Since they believe carbon levels so important, they must believe that a lowering (and continued lowering) of carbon levels will doom the planet to an even greater extent than high carbon levels. So what do these shallow-thinking warmists have as a Plan B? (Hint: they haven’t even considered the need for a Plan B.) Warmists are, as we can see, stupid all over.

Mark

Getting ahead of yourself, Nick. El Nino hardly ends a natural-variability leveling off. A downturn is still possible, because looking at the past has zero predictive power.

Gloateus Maximus

You’ve never seen Trenberth, Jones or Santer, et al, on the Plateau quoted here? I have.
Nature has already run an experiment on the temperature effect of monotonically rising CO2 for 32 years, 1945 to 1977. The conclusion is that it is insignificant, because the world cooled dramatically during those postwar decades despite rapidly rising CO2. So dramatically, in fact, that climate scientists feared a return to glacial conditions, including some now touting CAGW alarmism.
The catastrophic cooling fears were squelched by the flip in the PDO of 1977. That oscillation wasn’t discovered for another 20 years, and by a PNW fisheries guy, not a climatologist.
CACA was born falsified. Or should I say hatched? Pray for a President Cruz to shut down this multi-trillion dollar, murderous organized criminal computerized activity.

Gloateus Maximus

Nick,
How did you miss this one, for instance?
http://wattsupwiththat.com/2013/05/22/kevin-trenberth-struggles-mightily-to-explain-the-lack-of-global-warming/
The whole corrupt enterprise is a travesty of science.

Bindidon

Even before having read your comment down to the end, I already know you will reject the incredible amounts of aerosols produced before, during and after WW II as the origin of that 1945-1977 cooling period.
I’m sure 🙂

Gloateus Maximus

Bindidon,
Thanks for yet another laugh!
You seriously imagine that WWII particulates influenced climate until 1977, then magically stopped doing so? Did you get that howler from the Potsdam Gestapo, too?
Explain then, please, the similar warm and cool cycles during the Modern Warm Period and the warm and cool cycles of prior hot and cold periods. Did the Crimean War, US Civil War and Franco-Prussian War cause the end of the LIA? Or was it the Carrington Event?
As mentioned, the recovery from the depths of the Maunder Minimum in the early 18th century was a far more impressive warm cycle than the late 20th century warming. It was followed by a cooler phase, but not as cold as during the Maunder, then by a less pronounced warming, followed by the Dalton Minimum cooling, aided I’ll grant for a year by Tambora. This was followed by a mild warming, which led into the Modern Warm Period at mid-19th century.
The Modern Warm Period had its initial warm phase, followed by a cold cycle from roughly the 1880s to 1910s. Maybe WWI ended that somehow. Then there was the powerful warming of the 1920s to ’40s, followed by the natural cooling cycle until the late 1970s, followed by the natural warming until the late ’90s or ’00s. Clear skies from anti-pollution efforts contributed more to the late 20th century warming than did man-made CO2.
It’s obvious to all that you’re a CACA troll pretending not to be a Warmunista.

Mark BLR

“I’ve never seen what they actually said properly quoted here.”
Just out of curiosity, would it be possible for you to write a summary of what YOU would consider to be “properly” quoting what they actually said on this subject ?

“Just out of curiosity, would it be possible for you to write a summary of what YOU would consider to be “properly” quoting what they actually said on this subject”
The first requirement is to quote what they actually said. Aphan has given various bits below. See if you can find where someone
“said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17”
Then you have to say what they were actually talking about. Was it surface? Troposphere? And importantly, were they talking about data that had been filtered in some way, as in ENSO-corrected? Again, you might like to look through Aphan’s material to see what Jones was actually talking about.
And if you want to advance an argument about it, you need to match them up. It’s no use saying “Look, 18 years of RSS” and “NOAA said 15 years”. Was NOAA talking about RSS or some other situation (and again, ENSO-corrected)? And what did NOAA actually say would be the implication of 15 years?

Mark BLR

Separate Point 1 (to Nick Stokes)
I have no idea who you had in mind when you wrote the word “they” originally (on April 7, 2016 at 3:34 am) !
The extract you chose from the article included “… the warmist community (Jones, Santer, etc) …”.
You reacted with : “I’ve never seen what they actually said properly quoted here”.
When you wrote “they” who, exactly, were YOU thinking of :
1) Jones ?
2) Santer ?
3) “etc” ?
4) Some combination of the above ?
What specific example(s) did you have in mind when you wrote your original comment, and what would YOU consider to be a “proper” summary of “what they actually said” in that case (/ those cases) ?

Mark BLR

Separate Point 2 (to Nick Stokes)
As with many issues in the climate change “debate”, there is a real problem with different people using the exact same phrase to mean completely different things.
In this particular case, the phrase is “statistically significant trend”, which can be used to mean EITHER
A) I am 95% confident that there IS a trend, OR
B) I am 95% confident that there is NO trend (/ that the trend is ZERO)
For case A, one of the criteria often used is that the “(95%) Confidence Interval” does NOT include the zero value (or something equivalent, e.g. “the 2.5% and 97.5% trend value limits have the same sign”).
Note that using this criterion means that you can NEVER declare a zero trend, i.e. case B, as “statistically significant” !
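Mark BLR's case-A criterion is easy to check numerically. Here is a minimal Python sketch; the numbers are illustrative, echoing the RSS figures in the head post, with the standard error back-computed rather than taken from any trend calculator:

```python
def ci_95(trend, stderr):
    """95% confidence interval for a trend estimate (normal approximation)."""
    half_width = 1.96 * stderr
    return (trend - half_width, trend + half_width)

def significantly_nonzero(trend, stderr):
    """Case A: 'significant' only if the whole CI has the same sign,
    i.e. the interval does not include zero."""
    lo, hi = ci_95(trend, stderr)
    return lo > 0 or hi < 0

# Numbers resembling the RSS trend in the head post:
# 0.871 C/century with CI from about -0.022 to 1.764.
print(ci_95(0.871, 0.4556))                  # ≈ (-0.022, 1.764)
print(significantly_nonzero(0.871, 0.4556))  # → False: zero is inside the CI
```

Note that case B can never be declared this way: failing the test only means the trend is indistinguishable from zero, not that it is zero.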

See if you can find where someone “said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17”

Other people on this thread have noted that the “15 years” comes from Phil Jones’ “Climategate” E-mail from 2009 (talking about surface datasets), while the “17 years” comes from Santer et al (2011, talking about lower troposphere datasets).
The “10 years” comes from the period just before AR4, 2005/6/7, when people on (mostly) “sceptic” blogs noted that surface temperatures since 2001/2 had flattened remarkably, and the typical response was something along the lines of : “Much too short a timescale, what’s important are the decadal trends“.
The 10, 15 and 17 year numbers come from 3 different sources, no ONE person “said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17”.
The “real” question being asked is : “How long does a zero trend in a given temperature dataset need to be in order to be ‘statistically significant’ ?”, because that would falsify the (C)AGW hypothesis and/or the climate models.
In 2005/6/7 the answer (from anonymous Internet posters …) was “decadal trends”, i.e. 10 years.
In 2009 Phil Jones said climate scientists only needed to “get worried” if a (surface) zero trend of 15 years occurred.
In 2011 Santer (et al) said “that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature”, which was widely interpreted as saying that if a zero trend in the satellite data occurred lasting 17 (or more) years then it COULD BE argued that “human effects = ZERO” was actually a scientifically viable point of view.
NB : The current “consensus” is that tropospheric datasets have wider error bars than surface temperature datasets, which would imply that the length of records “required for identifying human effects on global-mean surface temperature” is less than 17 years.

“which was widely interpreted as saying”
by people who aren’t very good at interpreting. But the thing is, he didn’t say it. You need to quote what he said, not what people like to interpret.
“Phil Jones said climate scientists only needed to “get worried” if a (surface) zero trend of 15 years occurred.”
This was actually just some chat from an email to a colleague. A fuller quote goes
“Bottom line – the no upward trend has to continue for a total of 15 years before we get worried. We’re really counting this from about 2004/5 and not 1998. 1998 was warm due to the El Nino.”
They never tell you that bit, do they? So he’s excluding El Nino years, and the downtrend that follows. And of course, the fact is that surface sets did not have a 15 year no upward trend anyway. In fact, he’s talking about the upcoming NOAA statement, which was about ENSO-adjusted data.
“The “real” question being asked is : “How long does a zero trend in a given temperature dataset need to be in order to be ‘statistically significant’ ?”, because that would falsify the (C)AGW hypothesis and/or the climate models.”
No, statistically significant never tells you anything like that. It just tells you that there is something worth investigating. You may find, for example, that it was due to an ENSO peak, and ENSO was not included in your stochastic model.

Aphan

http://www.bbc.com/news/science-environment-13719510
“The trend over the period 1995-2009 was significant at the 90% level, but wasn’t significant at the standard 95% level that people use,” Professor Jones told BBC News.
“Basically what’s changed is one more year [of data]. That period 1995-2009 was just 15 years – and because of the uncertainty in estimating trends over short periods, an extra year has made that trend significant at the 95% level which is the traditional threshold that statisticians have used for many years.
“It just shows the difficulty of achieving significance with a short time series, and that’s why longer series – 20 or 30 years – would be a much better way of estimating trends and getting significance on a consistent basis.”
So Jones thinks that the longer a pause remains, the greater significance it has. He said that the one extra year, from 15 years to 16, increased the significance of that trend from the 90% level to the 95% level. How significant then is the trend if it has lasted another 6 years (from his 2011 statement above)? 22 years at 120%??
http://onlinelibrary.wiley.com/doi/10.1029/2011JD016263/abstract
Santer et al 2011-
“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”
We can argue all day about what defines “making a big deal” about something, or whether or not we have “conclusive evidence that the models either:
a) grossly under estimated natural variability or
b) grossly over estimated CO2 sensitivity or
c) both”
Those are merely davidmhoffer’s personal opinions, but I find them both to be reasonable, logical opinions based upon the existing statements to the press and others that were made by “experts” prior. This one is not reasonable however-
“Actually, the pause, insofar as it existed, was only significant in that it might have been a prelude to a downturn. ”
NONE of the “experts” I know of added the qualifier that the “pause” would only be significant if it preceded a temperature downturn. NONE of them. They were all insistent, as were the models, that the LENGTH of the pause determined whether or not it was significant…and Phil Jones determined that it became significant at the 95% level in 2011.
Unless you can provide evidence to support YOUR OPINION here regarding the significance being based upon a following downturn-I find your opinion unreasonable and illogical

“They were all insistent, as were the models, that the LENGTH of the pause determined whether or not it was significant…and Phil Jones determined that it became significant at the 95% level in 2011.”
No. Jones said that the positive trend became significant in 2011. That is, significantly different from zero. An anti-pause. Look at the headline
“Global warming since 1995 ‘now significant'”

Gloateus Maximus

And Jones’ change of mind (I won’t speculate as to what prompted it, since it clearly wasn’t statistical analysis) was properly quoted here, which you claim never to have seen:
http://wattsupwiththat.com/2011/06/11/phil-jones-does-an-about-face-on-statistically-significant-warming/

And Jones’ change of mind (I won’t speculate as to what prompted it, since it clearly wasn’t statistical analysis)

I would not call it a change of mind. The answer is actually very simple. 2010 was an El Nino year. And since 1995 to 2009 was almost significant, then the El Nino of 2010 simply pushed it over the 95% mark. And it WAS statistical analysis that allowed Jones to say what he did.

“And Jones’ change of mind (I won’t speculate as to what prompted it, since it clearly wasn’t statistical analysis) was properly quoted here”
The headline and intro of the WUWT post has it all wrong. But the text does quote it properly:
“But another year of data has pushed the trend past the threshold usually used to assess whether trends are “real”. Dr Jones says this shows the importance of using longer records for analysis.”
As Werner says, it is simple and proper statistics, not a “change of mind”. Another year of data increases the confidence, and takes it past the designated level of 95%.
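The point that "another year of data increases the confidence" can be illustrated with a toy calculation. The residual scatter (0.1) and trend (0.01/yr) below are made-up numbers chosen only to show the mechanism, not fitted to any temperature series:

```python
import numpy as np

def trend_se(n_years, resid_sd=0.1):
    """White-noise standard error of an OLS slope fitted to n_years
    of annual data with residual scatter resid_sd (toy numbers)."""
    x = np.arange(n_years, dtype=float)
    return resid_sd / np.sqrt(np.sum((x - x.mean()) ** 2))

# The standard error shrinks as the series lengthens...
for n in (15, 16, 20, 30):
    print(n, round(trend_se(n), 5))

# ...so a fixed trend of 0.01/yr has t = trend/SE of about 1.67 at
# 15 years (below the 1.96 threshold) but about 2.58 at 20 years:
# the same trend crosses into significance just by adding data,
# with no "change of mind" involved.
```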

Gloateus Maximus

Werner and Nick,
If so, then he should have changed his opinion back again when 2010 was followed by cool years. But he didn’t.
Thus his change of mind was purely politically motivated. My speculation, but what other reason could there be for his not altering his conclusion again in subsequent years, as, based upon statistical analysis, the “Pause” continued in force?

Gloateus Maximus

Nick,
Please elaborate on why you imagine that the post has it all wrong.
Thanks.

If so, then he should have changed his opinion back again when 2010 was followed by cool years. But he didn’t.

I do not know whether he did or did not. Nor do I know if he was even asked about it later. Another thing that I do not know is whether 1995 to 2011 even had statistically significant warming due to changes to data sets over time. Hadcrut3 has been replaced by Hadcrut4 and Hadcrut4 has had at least 3 adjustments over the years.

Gloateus Maximus

Werner,
HadCRU is as much a work of science fiction as GISS or NOAA. As you may very well know.
Jones may or may not have provided annual updates after 2010, but it doesn’t matter. If he changed his mind back, as statistical analysis would have required, then good for him. If he didn’t, then my worst suspicions are confirmed.

GM,
“Please elaborate on why you imagine that the post has it all wrong.”
The post was headed:
“Phil Jones does an about face on “statistically significant” warming”.
and says
“From the “make up your mind” department:”
Werner has properly explained how there is no “make up your mind” issue. Stat significance is just an arithmetic calc, and depends on both the trend and the amount of data you have. From the trend viewer, you can create this plot:
http://www.moyhu.org.s3.amazonaws.com/2016/4/had1995.png
It shows trends from the start year on the y axis to the end year on the x axis. But trends that are not significantly above 0 are paled out (a similar criterion to the one Werner is using). I’ve marked a black line that shows trends of periods starting in 1995. You’ll notice that at any level except the most recent, as you go forward in end time (to the right), the trends eventually become significant.
I’m showing HADCRUT 4, so it doesn’t give the same as HADCRUT 3, which Jones was using in 2011. But as you follow the line right, it first becomes significant about 1998. That is the effect of the Nino pulse. The trend gets so large that it is significantly above zero even with short data period. But then it goes down again, and as more months are in, becomes significant about 2002. This is always going to happen at some time.
Jones was asked in 2010 about trend since 1995 because that was the longest you could go without significance. It was on the verge. The boundary of that pale region. When they asked again in 2011, there was now enough data to tip over into significance. It crossed the boundary.

As a follow-up, I see that I have a version of HADCRUT 3 from late 2011 here. So we can see exactly what the situation Jones was calculating is. Or almost exactly – Jones was probably using annual data, similar but not identical. Jones said:
“Last year’s analysis, which went to 2009, did not reach this threshold; but adding data for 2010 takes it over the line.”
So here is the picture. Again trends starting in 1995 are shown in black. Trends ending in 2009 are shown in blue, and in 2010 in red. So you can see looking down the blue line why he was asked about 1995 in 2010. It’s the last year in the pale insignificant region. And so Jones said, when asked, no, the trend since 1995 is not significant. That’s where the blue and black intersect.
But next year, on the red line, it’s in the significant area. This isn’t Jones not making up his mind. It’s reality. He just calculates it. To add to the fun, a year later he may well have had to say it was insignificant again. If you push up against the borders of anything, this is what you get.
http://www.moyhu.org.s3.amazonaws.com/2016/4/had2011.png

Chris

To Nick Stokes: “The swimming pool is a good analogy. Imagine adding 400 ppm of ink. Then you can’t see the bottom. In the IR range, in the air, CO2 is ink. And radiant heat needs a clear view to emerge. Otherwise less efficient modes of heat transfer are used.”
CO2 has LESS specific heat than most of the atmospheric air mix. So you struck out at the middle-grade-school level there.
More CO2 means emissions at a LOWER temperature than the standard atmosphere. Air with more CO2 can hold LESS energy before it emits energy.

David A

Nick says,
==================
“…Actually, the pause, insofar as it existed, was only significant in that it might have been a prelude to a downturn. But it wasn’t.
==================
Two things. First, it was very significant, as it was not in any of the IPCC models, and it demonstrates that those same models over-predict the warming that has occurred by two to three hundred percent. Yet in your mind this is not significant!
Additionally, the satellite warming happened due to warm ocean surfaces: a positive PDO, AMO, a strong El Nino, and the Pacific Blob. All appear to be reversing now in sync, while the cooling of the SH oceans continues. If the trend continues, a downturn is likely!

Bindidon

Thanks Werner Brozek for this material.
Well, I’m no warmist at all, but some statements I do not fully understand, e.g. “There is no statistically significant warming since…”.
1. Moyhu aka Nick Stokes, for example, shows for RSS3.3 the following trends till Feb 2016 when starting from:
– Aug 1993: 0.825 °C / century
– Apr 1993: 0.904 °C ”
– Dec 1992: 1.023 °C ”
Kevin Cowtan’s trend computer (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html) produces very similar results.
As does within Excel the simple linest function – all data baselined wrt UAH (1981-2010):
– Jan 1993: 0.112 ± 0.018 °C / decade.
2. For HadCRUT4 since Jan 2001:
– 1.032 °C / century @ moyhu;
– 0.103 ± 0.139 °C / decade @ Cowtan
– 0.103 ± 0.021 °C / decade @ Excel’s linest
Etc etc.
So my question: what do you mean with statistically significant?

Bindidon,
LINEST gives the OLS trend uncertainties, assuming the random variation is white noise. That gives much lower uncertainty. Most serious models of monthly temperature allow for autocorrelation. Hot/cold months tend to come in runs. That means larger trend variation is likely.
But your question of what is meant by statistically significant is a good one. A statistical model of observations is created consisting of a trend with random variations, with a fitted distribution. It is loosely said that there is x chance that the trend could have been zero, but strictly what the test says is that, if the model were replaced by one with zero trend but same fluctuation distribution, the chance of the observed trend or greater would be x.
But the key thing is that it is supposed that the distribution (of fluctuations) could be resampled. That means, basically, rerunning the weather. It doesn’t mean we are uncertain (to that extent) of the trend that happened. We are uncertain of what the trend might have been if we could do it all again ( which of course we can’t, except in models).
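Nick's point about LINEST assuming white noise can be sketched in Python. The AR(1) "effective sample size" correction below is one common way to allow for autocorrelation (Santer-style trend comparisons use something similar); the synthetic series is invented purely for illustration:

```python
import numpy as np

def trend_with_uncertainty(y):
    """OLS slope of y with two standard errors: the white-noise SE
    (what Excel's LINEST reports) and an AR(1)-adjusted SE that
    allows for runs of hot/cold months."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid ** 2) / (n - 2)
    se_white = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    # Lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    # Effective number of independent samples shrinks when r1 > 0
    neff = n * (1 - r1) / (1 + r1)
    se_ar1 = se_white * np.sqrt((n - 2) / max(neff - 2, 1))
    return slope, se_white, se_ar1

# Synthetic monthly anomalies with autocorrelated noise (20 years)
rng = np.random.default_rng(0)
noise = np.zeros(240)
for i in range(1, 240):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.001 * np.arange(240) + noise

slope, se_w, se_a = trend_with_uncertainty(y)
print(se_a > se_w)  # → True: the adjusted uncertainty is wider
```

This is the same reason Bindidon's LINEST intervals come out much tighter than those from Kevin Cowtan's calculator, which corrects for autocorrelation.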

Bindidon

Understood, many thanks. I have seen that Kevin Cowtan’s standard deviation is much higher than that of linest.

Put very, very simply, in statistics, you are looking at the “odds”, the probability of something (an event, or relationship, or correlation between things) happening, occurring, existing.
Statistically insignificant results are things that have low probability of occurring. If the results are statistically insignificant, it means that there is just as much, or more of, a chance that you got your results by sheer chance, flukes, coincidence or data error, than because they are accurate, real, and dependable.
Statistically significant results are things that have high odds. It means that the odds of that thing happening or existing by mere chance/fluke/error/coincidence are LOW. It means that you are at least 95% confident that your results are real, accurate, and repeatable.
Like the quote I supplied from Jones above, the longer the time series is, the more data you have, the bigger your sample is…the more confident you become in the “significance” of your results. Jones suggests that the “traditional threshold that statisticians have used for many years”, between a statistically insignificant trend and a statistically significant one is 95% confidence. And to Jones, the “pause” became a statistically significant trend in 2011.
BUT-
“Statistical significance does not mean practical significance.
The word “significance” in everyday usage connotes consequence and noteworthiness.
Just because you get a low p-value and conclude a difference is statistically significant, doesn’t mean the difference will automatically be important. It’s an unfortunate consequence of the words Sir Ronald Fisher used when describing the method of statistical testing.”
(http://www.measuringu.com/blog/statistically-significant.php)
So something can be “statistically significant” but completely irrelevant or unimportant in any other way outside of statistics.
To clarify- the phrase “statistically significant warming for the past X years”, means that (only) statistically speaking, there is at least a 10% chance, (could be even greater) that any warming trend calculated could merely be the result of data error or faulty algorithms or processing methods. It means that it cannot be said…statistically… with at least 95% confidence… that there is a real, accurate, provable trend.

Bellman

Aphan:

Statistically insignificant results are things that have low probability of occurring. If the results are statistically insignificant, it means that there is just as much, or more of, a chance that you got your results by sheer chance, flukes, coincidence or data error, than because they are accurate, real, and dependable.

This is completely wrong. Statistically insignificant results are results that have not passed an arbitrary level of significance, say 5%. If a result is insignificant at the 5% level, it means that if whatever you were trying to measure had no effect then there was at least a 5% chance that you would have got the same result or better.
It does not mean there is just as much likelihood you got the results by chance – there might be only a slim probability (e.g. 1 in 20) that if the results were happening by chance you would have seen those results.
Moreover you cannot say what the likelihood that the results were obtained by chance by looking at the significance, though that’s a common misunderstanding. It’s an important distinction between the probability of getting these results assuming no-effect and the probability of no-effect given these results.
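Bellman's distinction can be made concrete with a small Monte Carlo sketch: the p-value is the chance of seeing a trend this large *if there were no real trend*, which is not the same as the chance that the observed trend arose by chance. All numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 180                      # 15 years of monthly anomalies
observed_trend = 0.0002      # degrees per month, hypothetical

# Simulate the null hypothesis: zero trend plus white noise,
# and record the OLS trend of each synthetic series.
x = np.arange(n)
sim_trends = np.array([
    np.polyfit(x, rng.normal(0.0, 0.1, n), 1)[0]
    for _ in range(2000)
])

# Fraction of no-trend worlds producing a trend at least this big:
# that fraction is the p-value, nothing more.
p_value = np.mean(np.abs(sim_trends) >= observed_trend)
print(p_value)  # well above 0.05, so not significant here
```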

And to Jones, the “pause” became a statistically significant trend in 2011.

Jones did not say that. It’s meaningless.

I said your graph was a “nonsense” trend graph for the exact same reason you said the one from WFT was one…you failed to note the degree of “uncertainties” in the data you used, on your chart-and published it anyway. You gave observers no more reason to have confidence in your graph than they should have in the WFT graph. You also don’t indicate reference points on any axis for the “temperature” trend you included on the graph, or indicate where you got your “CO2 concentration delta” data from either. You add (after the fact) but not on the graph itself:
“The concentration’s ln has been scaled here as well, by a factor of 10 (JMA’s anomalies were by 15).
Only a math & physics specialist anyway would be able to scale all that stuff correctly, it’s just for optics here.
Moreover, we should keep in mind that Arrhenius’ ln formula in fact gives as result Watt/m², and not K or °C anomalies! The two do not correlate per definitionem”.
So basically, you are not a math or physics specialist, your graph is not scaled correctly, it’s pure optics, and your Arrhenius ln formula doesn’t really correlate well with basic temperature anomalies anyway.
If that doesn’t render your chart all but useless (which seems to be your definition of nonsense), I must have really missed something.
The argument that rising concentrations of CO2 in the atmosphere will cause global temperatures to rise dangerously is a CAGW argument, not that of skeptics. Skeptics tend to disagree with that, and while it’s boring as crap to talk about, the fact that rising CO2 has not correlated with rising temps in the manner predicted is the most obvious flaw in the AGW argument (they set the standards on that one…not us) and so it obviously comes up a lot.

We can always wait until this time next year and see where temperatures are and whether a La Niña develops over the coming year. It is not easy to measure a temperature trend when your end-point is the peak of a Super El Niño.

It is not easy to measure a temperature trend when your end-point is the peak of a Super El Niño.

That is very true! So how is this attempt?
I feel that a comparison for January to March of 1998 versus 2016 is more meaningful for UAH.
The January to March average in 1998 was 0.536.
The January to March average in 2016 was 0.702.
This is a difference of 0.166 C over 18 years or 0.922 C per century.
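The arithmetic in that comparison is easy to check; a minimal sketch, using the values quoted in the comment:

```python
# Convert the quoted Jan-Mar UAH anomaly averages into a per-century rate.
avg_1998 = 0.536   # °C anomaly, January to March 1998
avg_2016 = 0.702   # °C anomaly, January to March 2016
years = 18

diff = avg_2016 - avg_1998               # 0.166 °C over 18 years
rate_per_century = diff / years * 100    # ≈ 0.922 °C per century
```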

Three months of weather during an El Nino periodic anomaly compared to three months of weather during a different El Nino periodic anomaly?
How scientific!
The most amazing thing is how little shame such comparisons bring.

David A

W. B. says…
==================================
“I feel that a comparison for January to March of 1998 versus 2016 is more meaningful for UAH.
The January to March average in 1998 was 0.536.”
The January to March average in 2016 was 0.702.
This is a difference of 0.166 C over 18 years or 0.922 C per century.
=====================================
First a question. Was January to March 1998 the warmest three months of the 98 El Nino?
Let us see what happens as the 2015-16 event also had the Pacific Blob. If the AMO turns down, and we get a strong La Nina, and the blob continues to dissipate, and the SH oceans continue to cool (all this appears to be happening) we may well end up near the 1979 satellite GMT.
My best guess is that we will have about .166 degrees over 37 years at that point. IMV, the oceans (with 1000 times the energy of the atmosphere) wag the atmosphere’s tail, TOA changes control input into the oceans, and solar spectrum changes over solar cycles control TOA changes. http://joannenova.com.au/2015/01/is-the-sun-driving-ozone-and-changing-the-climate/

First a question. Was January to March 1998 the warmest three months of the 98 El Nino?

No, since April 1998 was a record that lasted until February 2016. Unfortunately, we do not have April 2016 yet, but when we do, a comparison of January to April would be much more meaningful. In the meantime, I do the best with what we have.
On a different post, all of 2015 was compared with 1997, but I do not agree with that since 2015 had higher ENSO numbers.
Below is an earlier comment I made on April 1:

The actual figure for the average difference for the comparable 15 months is just under +0.25C. This corresponds to 1.38 degrees per century.

I do not believe that is a fair comparison since apparently all of 2015 at one time would have been an official El Nino, but missed out when one of the numbers was downgraded from a 0.5 to 0.4. See:
http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
However using the numbers in the above site, the average for 1997 was 1.04, but the average for 2015 was 1.26.
As a result, I feel that a comparison for January to March of 1998 versus 2016 is more meaningful.
The January to March average in 1998 was 0.536.
The January to March average in 2016 was 0.702.
This is a difference of 0.166 C over 18 years or 0.922 C per century.

Bindidon

Sometimes I really don’t understand why some comments simply are refused… and if you forget to save them before, 20 minutes of patient work simply get lost…

Depends on the dormancy rates of the site and other programs. Basically, you’re typing on your computer keyboard, but the connection between your computer and the site “dies” or terminates due to no actual “online” activity occurring between them. Kind of like how your iPad goes into hibernation after a certain period of time; your link to the site can do the same thing even if your screen is active on YOUR end of things.
If your link died before you hit send, and you didn’t “save” your comment to anything, it just disappears into the vapor. Nature of the beast. If you put great effort into something or just want to be sure, highlight everything you typed into the box and click “copy” before you hit send. Then you can just paste it back in if it’s gone. Or, like Johann said, type your comments in Word or Notepad or something and then just copy and paste them in here.

Bartemis

On this site, I often have comments vanish when I click “post”, only to appear some minutes later.

Bindidon

No Aphan: the problem was completely different, I just understood what it is due to.
It seems to me that, unlike all the other sites where I sometimes publish little comments (Roy Spencer, Judith Curry, French newspapers, …), this site http://wattsupwiththat.com keeps track of the IPs you use to communicate.
When you publish from 2 locations (work, private) you have 2 different dynamic IP addresses, possibly even changing over day or night.
And that’s the reason my comments were not published immediately.

Hivemind

Been there, done that, got the T-shirt. Sometimes I think that is literally it. If you take too long to comment, that seems to be when your comment isn’t saved.

Johann Wundersamer

Type and save your comment in Word; copy and paste from there into the comment space.

kim

Brief commenting is a race against time, and I lose too.
=============

Dr. S. Jeevananda Reddy

The natural variability, with or without adjustment of raw data, has a 60-year cycle [even a simple moving-average analysis showed the 60-year cycle is valid], varying between -0.3 and +0.3 °C as a sine curve. Through extrapolation, this can be shown on the 23-year data series; it need not be more than 10 years.
In 1880, the value was 0.0 °C. By 2000 two cycles are completed. From 2001 onwards it is on the rising side, reaching 0.3 °C by around 2015, and thereafter it comes down to 0.0 °C by 2030. By 2045 it reaches a minimum of -0.3 °C, and by 2060 it again reaches 0.0 °C.
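The 60-year component described above can be sketched as a sine wave; the function below is my paraphrase of the comment’s description, not the commenter’s own formula:

```python
import math

def natural_cycle(year):
    # 60-year sine wave, amplitude 0.3 °C, passing through zero in 1880:
    # zero again in 2000 and 2030, peak near 2015, trough near 2045,
    # matching the milestones listed in the comment.
    return 0.3 * math.sin(2 * math.pi * (year - 1880) / 60)
```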

And now, the 4 trillion dollar question: Does today’s 2000-2015 “peak” of the 60 year natural short cycle mean we have reached the “peak” of the longer 900-1000 warming cycle? Does the Modern Warming Period – irritatingly abbreviated the same as the Medieval Warming Period – peak now (2000-2010) or does it peak one short cycle later in 2060, or two short cycles later in 2120-2130?
When does the Modern/Modest Ice Age show up? 2400 AD? 2460?

Gloateus Maximus

Half of an average Bond Cycle is about 735 years. Thus the warm period beginning around 1850 should last until 2585.
In the year 2525, if Man is still alive…
But could also end a lot sooner. The Medieval WP lasted only at most from AD 800 to 1400 and probably less, i.e. 950 to 1350.

Gloateus, the warming actually started in the late 1600s.

Gloateus Maximus

Tommy,
The depths of the LIA were in the coldest part of the Maunder, ie the 1690s, as you suggest, but there was a last blast in the first decade of the 18th century. The horrific Great Frost of 1708/9 was the end of the trough of the LIA.
There has probably not been anything like that winter since and rarely before in the Holocene. Even royalty suffered. History was affected by the testicles of Swedish soldiers actually freezing off in the Ukraine, leading to the disaster of Poltava and rise of Peter the Great and Russia. Not to mention the poor peasants and birds of the air and beasts of the fields.

Notanist

>…The “Pause” … only became significant because the warmist community (Jones, Santer, etc) said that natural variability was too small to cancel the warming of CO2 for more than a period of 10 years…er 15…er 17 and made a big deal out of it….
We have to rely on “because they said” only because there was no pre-1988 peer-reviewed scientific paper that described the mechanism/equations for CO2 controlling earth’s temperature to justify Al Gore’s congressional stunt that led to the IPCC and all of the rest of it.
All statistics done since then are post hoc dredging for correlations. Future history books are so going to mock…

seaice1

“there was no pre-1988 peer-reviewed scientific paper that described the mechanism/equations for CO2 controling earth’s temperature”
Now, Arrhenius was around before peer review was the norm, but he did exactly what you describe. I find it hard to believe that there were no papers describing this before 1988.
Indeed, a quick check shows there were peer-reviewed papers calculating global warming with CO2 before 1988. In 1956 Gilbert Plass calculated that doubling the level of CO2 would lead to a 3-4C rise in global temperatures. However, before 1970 there was no clear consensus that human-produced CO2 could affect the climate. It was during the 1970’s that the field proliferated.

seaice1 says:
In 1956 Gilbert Plass calculated that doubling the level of CO2 would lead to a 3-4C rise in global temperatures.
That’s the kind of repeatedly falsified nonsense that ‘seaice1’ and the rest of the alarmist crowd relies on. Also, Arrhenius recanted his original hypothesis, and wrote a later paper that hypothesizes that 2xCO2 would result in well below 2ºC of warming. They always conveniently forget to mention that part. And real world observations are showing even that number is probably far too high.
To paraphrase Prof. Richard Feynman, if your theory is contradicted by observations, it’s WRONG. That’s all there is to it.
Observations show conclusively that the rise in CO2 is not causing a 3 – 4ºC rise in global T. In fact, observations show that there is no difference between the natural rise in global T before or after the 1940’s – 1950’s time frame, when industrial CO2 emissions really began to rise.
Therefore, the CO2-cAGW conjecture is falsified. It is wrong. But instead of doing what science demands — defenestrating that failed conjecture, then trying to figure out why its “calculations” were totally wrong, and then trying to come up with a new hypothesis that actually works — the climate alarmist club digs in its collective heels and tries to ‘explain’ why empirical observations are wrong.
The way they do this now is to fabricate what they laughingly refer to as “data”, and try to convince the public that a climate catastrophe is still in the works.
That’s called ‘fraud’, and it gets worse every day. But since the alternative is to admit that skeptics of their falsified conjecture were right all along, and since big money is involved, they’ve made the decision to sell their souls.
Because they just cannot stand the thought of admitting that skeptics, whom they hate and fear, were right to be skeptical of the claims that the rise in CO2 by only one part in 10,000 will cause all the scary disasters that they’ve been predicting, teaching and preaching to each other for the past several decades. They just cannot admit that they were wrong — even though Planet Earth is busy showing everyone that their claims are ridiculous.

Something can be peer reviewed and still turn out to be utter poppycock. Peer review is not proof. It’s not validation. It’s simply “nice hypothesis, data and methods seem ok, well written, we’ll publish it so others can read it and test your results”. PERIOD. And as far as I know, Arrhenius never claimed that CO2 alone could “control earth’s temperature”. I’d love a reference if you have one.
As dbstealey notes, Arrhenius DID recant his prior estimations and brought his expectations for warming due to a doubling of CO2 down below 2C.

dbstealey:
“Also, Arrhenius recanted his original hypothesis, and wrote a later paper that hypothesizes that 2xCO2 would result in well below 2ºC of warming. They always conveniently forget to mention that part. ”
OK, in the “real” world, would you like to find and link evidence for this?
I cannot find anything but confirmation of his original ideas.
http://ponce.sdsu.edu/global_warming_science.html
http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf
“Observations show conclusively that the rise in CO2 is not causing a 3 – 4ºC rise in global T.”
And no they don’t.
We have had a 1C rise now for a ~40% rise in CO2, with another 1C in the pipeline due to thermal inertia.
No, what observations show this last 18 years is how much the PDO/ENSO cycle affects GMT.
That GCM’s missed it is no surprise as they are an amalgam of runs and so average out climate cycles and currently the most important cycle cannot be forecast anyway.
The cycle SHOULD cancel out in the long term – the fact that the cool cycle has not resulted in a GMT drop is WHY the models have shown us we are correct.
“Therefore, the CO2-cAGW conjecture is falsified”
Only down the rabbit-hole is it.
Thereafter follows typical db evidence-lacking rambling hand-waving.
The only answer to that (apart from the obvious, if one knows the science) is to say “if you say so”, because you are just expecting us to believe your word, and not a JOT of science (peer-reviewed and, in the case of the GHE, empirical) is offered in evidence.

John Finn

Toneb

We have had a 1C rise now for a ~40% rise, with another 1C in the pipeline due to thermal inertia.

1. To date, we have around half the forcing expected from a doubling of CO2 (for an estimate – use Myhre et al formula, i.e. Forcing = 5.35* ln(C1/C0) where C0= initial CO2 reading in pre-industrial era ; C1= final/current CO2 reading).
2. Current forcing is actually more than half the 2xCO2 forcing because of increases in other GHGs, e.g. methane.
3. There isn’t 1 deg C in the pipeline. Even if we accept the claimed (not measured) 0.6 w/m2 TOA imbalance there is no way that could realise a further 1 degree of warming.
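Point 1 can be sketched with the quoted Myhre et al. formula; the 280 ppm and 400 ppm concentrations below are my illustrative assumptions for pre-industrial and then-current CO2, not figures from the comment:

```python
import math

def co2_forcing(c1, c0):
    # Myhre et al. simplified expression: F = 5.35 * ln(C1/C0), in W/m^2.
    return 5.35 * math.log(c1 / c0)

f_now = co2_forcing(400.0, 280.0)   # forcing for a rise from ~280 to ~400 ppm
f_2x  = co2_forcing(560.0, 280.0)   # forcing for a full doubling of CO2
fraction = f_now / f_2x             # ≈ 0.51, i.e. "around half"
```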

catweazle666

“In 1956 Gilbert Plass calculated that doubling the level of CO2 would lead to a 3-4C rise in global temperatures. “
And in 1971 Schneider and Rasool had this to say:
We report here on the first results of a calculation in which separate estimates were made of the effects on global temperature of large increases in the amount of CO2 and dust in the atmosphere. It is found that even an increase by a factor of 8 in the amount of CO2, which is highly unlikely in the next several thousand years, will produce an increase in the surface temperature of less than 2 deg. K.
Schneider S. & Rasool S., “Atmospheric Carbon Dioxide and Aerosols – Effects of Large Increases on Global Climate”, Science, vol.173, 9 July 1971, p.138-141
Those results were based on a climate model developed by none other than James Hansen, incidentally.

Bartemis

Toneb –
“We have had a 1C rise now…”
Half of which occurred naturally before CO2 concentration had risen appreciably. There is no basis to conclude that the other half was not natural.
“No, what observations show this last 18 years is how much the PDO/ENSO cycle affects GMT.”
If those cycles were able to produce the “pause”, then they also were capable of producing the rise from about 1970-2000, upon which the entire anthropogenic attribution hypothesis depends.
“…with another 1C in the pipeline due thermal inertia.”
Or, not. There is no actual evidence that there is any significant anthropogenic impact globally on surface temperatures at all.

Bartemis:
“Half of which occurred naturally before CO2 concentration had risen appreciably. There is no basis to conclude that the other half was not natural.”
And what accounted for the “natural” 0.5C?
TSI?- which has overall been falling since ~1975….
http://amptoons.com/blog/wp-content/uploads/2011/03/Solar_vs_Temp_basic.gif
“If those cycles were able to produce the “pause”, then they also were capable of producing the rise from about 1970-2000”
No, as any natural warming has been cancelled by natural cooling. It has to be in the long term. SW absorbed must equal LWIR emitted, so a warming exceeding that will come down to the balance point, and vice versa, to match TSI.
Yes the +ve PDO enso phase until ~1975 upped GMT but since then apart from 97/98, 09/10 and 15/16 we have seen a mostly -ve Pacific.
The “natural” warming is cyclic followed by a “natural” cooling in the (mostly) -ve PDO/ENSO phase which we have had since circa 97/98.
The +ve forcing of GHG concentration only overtook the -ve forcing of aerosol in the atmosphere ~1960 and therefore total forcing did not go +ve until then.
Here we see that since then (1960)….
http://woodfortrees.org/graph/gistemp/from:1960/to:2016/plot/gistemp/from:1960/to:2016/trend/plot/hadcrut4gl/from:1960/to:2016/plot/hadcrut4gl/from:1960/to:2016/trend/plot/none
That for GISS there has been ~1C warming
For Hadcrut ~0.8C warming.

Toneb-
http://www.friendsofscience.org/assets/documents/Arrhenius%201906,%20final.pdf
“In a similar way, I calculate that a reduction in the amount of CO2 by half, or a gain to twice the amount, would cause a temperature change of –1.5 degrees C, or + 1.6 degrees C, respectively. “

Bartemis

“And what accounted for the “natural” 0.5C?”
Many things could have. Natural modes of oscillation within the planetary climate system stretch for hundreds, even thousands of years. Just because you cannot pinpoint a particular root cause does not mean there isn’t one.
“It has to in the long term.”
And, that long term can be very long indeed. There is no definite limit on it.
“The +ve forcing of GHG concentration only overtook the -ve forcing of aerosol in the atmosphere ~1960…”
Sure, it did. It certainly helps that you can dial in any aerosol impact you like to make things look nice.
“Here we see that since then (1960)….”
Why stop there? Here
http://woodfortrees.org/graph/hadcrut4gl/from:1900/plot/hadcrut4gl/from:1910/to:1945/trend/plot/hadcrut4gl/from:1970/to:2005/trend
we see that there was comparable warming earlier in the century, well before CO2 could have been causing it. It is of essentially the precise same magnitude, which makes it folly to insist that the earlier warming episode and the later one had separate drivers, conveniently rendered equal by arbitrary selection of aerosol forcing.
You seem to be trying to talk yourself into a conclusion. Stop rationalizing, and start delineating what you truly know from what you are just guessing at. There is no evidence here of any appreciable CO2 sensitivity whatsoever.

“The +ve forcing of GHG concentration only overtook the -ve forcing of aerosol in the atmosphere ~1960…”
Sure, it did. It certainly helps that you can dial in any aerosol impact you like to make things look nice.
_______________________
Agreed Bart – the fraudulent use of fabricated aerosol data to force the climate models to hindcast the natural global cooling that occurred from about 1940 to 1975 is discussed here:
http://wattsupwiththat.com/2016/02/24/new-paper-shows-global-warming-hiatus-real-after-all/comment-page-1/#comment-2152998
I contest the frequent modelers’ claims that manmade aerosols caused the global cooling that occurred from ~1940 to 1975. This aerosol data was fabricated to force the climate models to hindcast the global cooling that occurred from 1940 to 1975, and is used to allow a greatly inflated model input value for climate sensitivity to atmospheric CO2 (ECS).
The climate models cited by the IPCC typically use ECS values far above 1C, which must assume strong positive feedbacks for which there is no evidence. If anything, feedbacks are negative and ECS is less than 1C. This is the key reason why the IPCC’s climate models greatly over-predict global warming, imo.
Some history on this subject follows:
http://wattsupwiththat.com/2009/06/27/new-paper-global-dimming-and-brightening-a-review/#comment-151040
Allan MacRae (03:23:07) 28/06/2009 [excerpt]
Repeating Hoyt : “In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly.”
___________________________
Here is an email just received from Douglas Hoyt [in 2009 – my comments in square brackets]:
It [aerosol numbers used in climate models] comes from the modelling work of Charlson where total aerosol optical depth is modeled as being proportional to industrial activity.
[For example, the 1992 paper in Science by Charlson, Hansen et al]
http://www.sciencemag.org/cgi/content/abstract/255/5043/423
or [the 2000 letter report to James Baker from Hansen and Ramaswamy]
http://74.125.95.132/search?q=cache:DjVCJ3s0PeYJ:www-nacip.ucsd.edu/Ltr-Baker.pdf+%22aerosol+optical+depth%22+time+dependence&cd=4&hl=en&ct=clnk&gl=us
where it says [para 2 of covering letter] “aerosols are not measured with an accuracy that allows determination of even the sign of annual or decadal trends of aerosol climate forcing.”
Let’s turn the question on its head and ask to see the raw measurements of atmospheric transmission that support Charlson.
Hint: There aren’t any, as the statement from the workshop above confirms.
__________________________
IN SUMMARY
There are actual measurements by Hoyt and others that show NO trends in atmospheric aerosols, but volcanic events are clearly evident.
So Charlson, Hansen et al ignored these inconvenient aerosol measurements and “cooked up” (fabricated) aerosol data that forced their climate models to better conform to the global cooling that was observed pre~1975.
Voila! Their models could hindcast (model the past) better using this fabricated aerosol data, and therefore must predict the future with accuracy. (NOT)
That is the evidence of fabrication of the aerosol data used in climate models that (falsely) predict catastrophic humanmade global warming.
And we are going to spend trillions and cripple our Western economies based on this fabrication of false data, this model cooking, this nonsense?
*************************************************
Allan MacRae
September 28, 2015 at 10:34 am
More from Doug Hoyt in 2006:
http://wattsupwiththat.com/2009/03/02/cooler-heads-at-noaa-coming-around-to-natural-variability/#comments
[excerpt]
Answer: Probably no. Please see Douglas Hoyt’s post below. He is the same D.V. Hoyt who authored/co-authored the four papers referenced below.
http://www.climateaudit.org/?p=755
Douglas Hoyt:
July 22nd, 2006 at 5:37 am
Measurements of aerosols did not begin in the 1970s. There were measurements before then, but not so well organized. However, there were a number of pyrheliometric measurements made and it is possible to extract aerosol information from them by the method described in:
Hoyt, D. V., 1979. The apparent atmospheric transmission using the pyrheliometric ratioing techniques. Appl. Optics, 18, 2530-2531.
The pyrheliometric ratioing technique is very insensitive to any changes in calibration of the instruments and very sensitive to aerosol changes.
Here are three papers using the technique:
Hoyt, D. V. and C. Frohlich, 1983. Atmospheric transmission at Davos, Switzerland, 1909-1979. Climatic Change, 5, 61-72.
Hoyt, D. V., C. P. Turner, and R. D. Evans, 1980. Trends in atmospheric transmission at three locations in the United States from 1940 to 1977. Mon. Wea. Rev., 108, 1430-1439.
Hoyt, D. V., 1979. Pyrheliometric and circumsolar sky radiation measurements by the Smithsonian Astrophysical Observatory from 1923 to 1954. Tellus, 31, 217-229.
In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly. There are other studies from Belgium, Ireland, and Hawaii that reach the same conclusions. It is significant that Davos shows no trend whereas the IPCC models show it in the area where the greatest changes in aerosols were occurring.
There are earlier aerosol studies by Hand and in other in Monthly Weather Review going back to the 1880s and these studies also show no trends.
So when MacRae (#321) says: “I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975. Isn’t it true that there was little or no quality aerosol data collected during 1940-1975, and the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”, he is close to the truth.
________________________________________________________
Douglas Hoyt:
July 22nd, 2006 at 10:37 am
Re #328
“Are you the same D.V. Hoyt who wrote the three referenced papers?” Yes.
“Can you please briefly describe the pyrheliometric technique, and how the historic data samples are obtained?”
The technique uses pyrheliometers to look at the sun on clear days. Measurements are made at air mass 5, 4, 3, and 2. The ratios 4/5, 3/4, and 2/3 are found and averaged. The number gives a relative measure of atmospheric transmission and is insensitive to water vapor amount, ozone, solar extraterrestrial irradiance changes, etc. It is also insensitive to any changes in the calibration of the instruments. The ratioing minimizes the spurious responses leaving only the responses to aerosols.
I have data for about 30 locations worldwide going back to the turn of the century. Preliminary analysis shows no trend anywhere, except maybe Japan. There is no funding to do complete checks.
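The insensitivity to calibration that Hoyt describes follows directly from taking ratios; a toy sketch with made-up readings (not real pyrheliometer data):

```python
# Direct-beam readings at air masses 5, 4, 3 and 2 (hypothetical values).
# Per Hoyt's description, the ratios 4/5, 3/4 and 2/3 are found and
# averaged. A calibration error scales every reading by the same factor,
# so it cancels in each ratio, while a change in aerosol load alters the
# ratios themselves.
def transmission_index(i5, i4, i3, i2):
    return (i4 / i5 + i3 / i4 + i2 / i3) / 3.0

a = transmission_index(0.50, 0.58, 0.68, 0.80)
b = transmission_index(0.55, 0.638, 0.748, 0.88)  # same sky, +10% calibration
# a and b agree: the index responds to aerosols, not to instrument drift.
```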

Bellman

I take it that if you say there has been no statistically significant warming since May 1993, you mean there has been statistically significant warming since April 1993.
It really doesn’t make sense to specifically choose a start date just because it doesn’t show significance.

Bellman

It really doesn’t make sense to specifically choose a start date just because it doesn’t show significance.

I am surprised you do not understand: The “start date” of the analysis of “the pause” is “today.”
The analysis – as it does every month – starts at “today” date and “today’s” temperature data.
THEN, the analysis goes backwards in time until a statistically significant change is detected. That change (for this month) happens to become May 1993. So, yes, April 1993 was statistically cooler than March 2016.
The “end date” of the report is NEVER chosen by the reporter, but by the data trend itself.
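That backwards search can be sketched as follows, assuming a bare OLS slope with no significance test (the actual analyses use Nick Stokes’ trend viewer and its confidence intervals):

```python
def ols_slope(y):
    # Ordinary least-squares slope of y against 0, 1, ..., n-1.
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sxy / sxx

def pause_start(anomalies, min_len=24):
    # Work backwards from the latest month: the "pause" start is the
    # earliest month such that the trend from there to today is <= 0.
    best = None
    for start in range(len(anomalies) - min_len, -1, -1):
        if ols_slope(anomalies[start:]) <= 0:
            best = start
    return best  # None means no qualifying window exists
```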

Bellman

RACookPE1978:

I am surprised you do not understand: The “start date” of the analysis of “the pause” is “today.”

Sorry, I assumed that when the article said the trend was from “May 1993 to Feb 2016” it meant the trend starting in May. But if you prefer I’ll change my statement to
“It really doesn’t make sense to specifically choose an end date just because it doesn’t show significance.”

Bellman

Sorry, I assumed that when the article said the trend was from “May 1993 to Feb 2016” it meant the trend starting in May. But if you prefer I’ll change my statement to
“It really doesn’t make sense to specifically choose an end date just because it doesn’t show significance.”

You still don’t get it, do you?
No. Exactly the opposite of the second statement. Nobody “chooses” an end date. The analysis “chooses” a statistically significant flat line. Then, the “flat line” is run backwards from today’s date (using February 2016 temperature data in this report). Wherever the two lines (temperature vs time and a flat line vs time) intersect becomes the “end date”.

Bellman

An analogy might help. Say I want to test if a particular drug improves the recovery rate for some hypothetical disease. I test the drug on 30 patients and find the recovery rate is significantly improved.
But then I remove one patient at random from the group and see if the recovery rate is still significant. If it is I remove another patient and so on until I find a small enough group for which the results are no longer significant. Say this happens when there are only 23 patients in the group.
Is it sensible to draw any conclusions from the fact that for a group of 23 patients the results were not statistically significant?
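The effect the analogy relies on is real: holding the effect size fixed while shrinking the sample inflates the standard error until significance disappears. A stdlib-only sketch with hypothetical recovery counts:

```python
import math

def two_prop_z(x_a, n_a, x_b, n_b):
    # Two-proportion z statistic for comparing recovery rates.
    p_a, p_b = x_a / n_a, x_b / n_b
    p = (x_a + x_b) / (n_a + n_b)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Identical recovery rates (80% vs 50%), different sample sizes:
z_full  = two_prop_z(24, 30, 15, 30)   # |z| > 1.96: significant at 95%
z_small = two_prop_z(8, 10, 5, 10)     # |z| < 1.96: not significant
```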

seaice1

RACookPE1978.
“The “end date” of the report is NEVER chosen by the reporter,”
We agree about this at least, if about little else. If the pause is no more, it is no more. We cannot go back into the data, pick the end date, and say “there was a pause from date A to date B.”
At least, this is not the same as “the pause”.

Mark

Sea ice
“The “end date” of the report is NEVER chosen by the reporter,”
We agree about this at least, if about little else. If the pause is no more, it is no more. We cannot go back into the data, pick the end date, and say “there was a pause from date A to date B.”
At least, this is not the same as “the pause”.”
Strange, you seem fine with cherry picking that agrees with your outlook. Tree rings; ice volume in the Arctic from ’79, when there is data from earlier that decade that shows Arctic sea ice lower than today.
Worse still, when the cherry pick is not good enough, attempts to rewrite history have been employed: Hansen essentially dismissed the work of meteorologists in their thousands by rewriting the historical temp record for half the planet and the US. Schmidt has overseen even bigger changes, and the attempts to erase the LIA and MWP and even the RWP.
Funny how you only see these things when it suits.
Anyhow, in the context of CO2-induced warming being a supposedly greater force than natural variability, then yes, statistical pauses of over 10 years are significant.
But you, like others, try to separate the statistical analysis from the AGW theory when making your point about cherry-picking pauses.

seaice1

“Ice volume in the arctic from 79 when there is data from earlier that decade that shows Arctic sea ice lower than today.” Ice extent, I think you mean. The data before 1979 used a different satellite with a single transmitter, which means the data are not directly comparable. There are good reasons for picking 1979 that have nothing at all to do with the data generated.
“But you like others try separate the statistical analysis from the AGW theory when making your point about cherry picking pauses.”
One thing at a time, otherwise they get muddled.

Bellman

RACookPE1978

You still don’t get it, do you?

No I don’t get what you are trying to explain.

The analysis “chooses” a statistically significant flat line.

By flat line, I presume you mean one with no trend. If so, what do you mean by it being statistically significant?

Wherever the two lines (temperature vs time and a flat line vs time) intersect becomes the “end date”.

You’ve completely lost me. Isn’t temperature vs time the same as the flat line?

By flat line, I presume you mean one with no trend. If so, what do you mean by it being statistically significant?

See the quote above:
“CI from -0.022 to 1.764”
So the range from -0.022 to 1.764 clearly includes zero. So if a slope of zero is possible since May 1993 at the 95% level, the warming since May 1993 is not statistically significant.

Bellman

So the range from -0.022 to 1.764 clearly includes zero. So if a slope of zero is possible since May 1993 at the 95% level, the warming since May 1993 is not statistically significant.

I agree, but that doesn’t mean the zero slope is statistically significant.
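For what it’s worth, the check being argued over reduces to whether the confidence interval straddles zero; a one-line sketch using the CI figures quoted at the top of the post:

```python
def significant_at_95(ci_low, ci_high):
    # A trend is statistically significant at the 95% level
    # iff its 95% confidence interval excludes zero.
    return not (ci_low <= 0.0 <= ci_high)

rss = significant_at_95(-0.022, 1.764)   # False: CI includes zero
uah = significant_at_95(-0.009, 1.830)   # False: CI includes zero
```

As Bellman notes, a False here means only that a zero slope cannot be ruled out, not that the zero slope itself is established.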

Bellman:

“…Is it sensible to draw any conclusions from the fact that for a group of 23 patients the results were not statistically significant?”

You ‘reduce’ a starting sample till you find a trend you desire??
Ah, the classic method of the “Xx% of scientists believe in AGW” falsehood.
–a) Done specifically for finding a tailored result.
–b) Direct evidence of confirmation bias where one searches data till they find the result they prefer.
Never is such a method sensible.
To begin with, a sample of 23 or 25 is too small to seriously consider statistical significance. As you yourself point out, just a change of one sample can seriously change any/all derived analysis, making any small-sample derivations impossible to reliably replicate without additional sample fiddling.
Yes, 23 or 25 years is a small sample; but that is not the sample. The sample is since temperature recording began. What is derived is a trend of years of ‘X.x’ change with a statistical significance.
A person looking at all of the years recorded may decide to ‘check’ trend beginning at ‘X’, but that accomplishes very little when considering current temperature impacts.
Examples are trends from 1900 to 1940, or 1940 to 1960. These are trends that allow scientists to evaluate questions concerning cycles, natural or unnatural.
Such small trends are illusory in that scientists, even today, do not know just how global weather truly works.
Sure scientists may have ideas about some parts of the pictures, but it will require decades of observations tracking specific intricately detailed climate inputs to build and prove those simple concepts.
What drives what? When? and Why?
Current means impacts to today! Start with today and reach back.
That brings up trends like, 1850 to today; 1900 to today or 1998 to today.
Following the CAGW scam con-artists claiming catastrophic or even dangerous warming from CO2; as CO2 increases in the atmosphere, temperature should also increase, in near lockstep.
When the trend beginning today reaches back 18 to 23 years into the past without statistically significant warming, the whole CO2 – CAGW destructive warming concept is toast.
Statistical significance is unwieldy when discussing temperatures.
While the satellite temperatures are truly global and precise, accuracy is still being vetted and the seriously well tracked timeframe is very short.
Satellite temperature history cannot be derived prior to satellites, and even then should only be derived from the latest deployed satellite equipment. A change of equipment means a change in input.
Accuracy depends upon knowing the error rates for every data-handling step from satellite pickup to final number. Every time data is collected, calculated, converted, handled (e.g. transmitted, received, stored, retrieved) there is a chance of error. Properly attended to, these errors are identified, tested and quantified before initial data is collected!
Engineers well know how to aggregate errors into a final sum, as their careers and social responsibilities will charge them with malfeasance for miscalculations that cause loss of life, profit or money.
Land temperatures are very suspect.
–a) There are no error rates for data collection, storage, handling or adjustments.
–b) Some folks believe that error rates can be averaged, e.g. high errors are offset by low errors.
–c) Claiming to ‘correct’ data errors has allowed and currently allows temperature data handlers unlimited license to adjust historical data!
In the engineering sense, every adjustment adds additional handling errors for all data handlings, e.g. retrieval, adjustment, storage.
In a scientific sense, any ‘adjustment’ entails a handling error equal to the adjustment! After all, why make a change unless there was an assumed error?
Worse, are assumed errors where data is changed without knowing the actual specific error. e.g. ‘Time of observation’ or ‘time of day’, (TOB, TOD) corrections where temperatures are adjusted without explicit knowledge of actual error.
There have been prior WUWT discussions regarding what comprises proper error tracking and calculations for land temperatures. It is worth searching them out and following their discussion.
Check out Anthony’s Surface Stations project for just a simple overview of the horrors used for temperature calculations. What is especially significant is that none of the errors identified are quantified into error bars by NOAA for incorporation into their ‘global temperature’ anomalies.
It is quite likely that NOAA’s temperature claims would be completely swamped by known but ignored errors, making any claim to global land temperature averages impossibly confused. That global temperature trend since 1850 with a result of ‘X.x’ may well have an attached error rate of ‘Xⁿ’.

Bellman-
Wow…just….wow.
The flaws in your analogy…
1) in your scenario, you claimed a “significant recovery” result…which is the antithesis/opposite of “no significant recovery” (so your scenario does not reflect or represent the actual situation at all…it’s the opposite of it)
2) If your scenario was accurate, then all 30 of your patients showed a statistically INsignificant improvement after using your drug. So you cannot remove 7 of them (or any number of years) and come up with a group that is statistically different from the original group!!!
“I take it that if you say there has been no statistically significant warming since May 1993, you mean there has been statistically significant warming since April 1993. ”
Huh? No, that makes no sense. What he means is that statistically significant warming was occurring through April of 1993, but then stopped in May, one month later. There has been no statistically significant warming since then.
“It really doesn’t make sense to specifically choose an end date just because it doesn’t show significance.”
*bangs head on desk*
The end date is the termination line between statistically insignificant and statistically significant. That date is specific BECAUSE it marks the change between the two things.
When determining how long a “trend” has lasted, one can only include the data that matches that trend. For example…I want to determine how long Uncle Joe has been dead. I must go backwards in time to the last time we know he was ALIVE. Let’s say that date was in May of 1993. The date that his life “ended” is SIGNIFICANT to determining how long his death trend is relative to today’s date. (no pun intended) Now, I can say it one of two ways and be accurate either way. I could state that in May of 1993, Uncle Joe STARTED a “death trend” or a “non-living trend”, OR I could say that his former “LIFE trend” ENDED on that date. It’s the same thing either way.
Should Uncle Joe suddenly rise from the dead tomorrow as if nothing happened between 1993 and tomorrow, I could then say something like “His life trend “paused” in 1993 and that pause continued until April 8th, 2016, upon which his former life trend resumed again” with perfect accuracy. OR I could say that his former life trend ENDED in 1993 and he STARTED a death trend at the same time which continued until April 2016, at which point his death trend ended and he began a new life trend.
Don’t confuse statistical significance with regular old significance as in “important” outside of statistics.

I agree, but that doesn’t mean the zero slope is statistically significant.

What it means is that there is a greater than 2.5% chance that the actual slope is negative.

Bellman

Aphan
Aphan

Wow…just….wow.
The flaws in your analogy…

in your scenario, you claimed a “significant recovery” result…which is the antithesis/opposite of “no significant recovery” (so your scenario does not reflect or represent the actual situation at all…it’s the opposite of it)

Maybe my analogy wasn’t as clear as it could have been.
By “significant recovery” with 30 patients I meant a statistically significant result,
which is analogous to 30 years of statistically significant warming to the current month.
By removing patients from the group until the result was not significant, I was alluding to the technique of removing months from the start of the trend period until you find the trend has become statistically insignificant.

If your scenario was accurate, then all 30 of your patients showed a statistically INsignificant improvement after using your drug. So you cannot remove 7 of them (or any number of years) and come up with a group that is statistically different from the original group!!!

Yes you can. That’s effectively what’s happening in the following statement:

Huh? No, that makes no sense. What he means is that statistically significant warming was occurring through April of 1993, but then stopped in May, one month later. There has been no statistically significant warming since then.

If the period from April 1993 – February 2016 shows statistically significant warming, then that’s a statement about the entire period, not just about the first month. If you remove the first month and the rise is no longer significant, that does not mean anything has changed in the remaining 23 years; it could just mean you no longer have enough data. The trend line might go up, but the confidence interval increases.
It’s the same with the patients. The recovery rate might be identical for each patient, but having a smaller sample size increases the confidence intervals and so reduces the likelihood of a statistically significant result.
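That sample-size effect can be illustrated numerically. In the sketch below (synthetic data, naive OLS with no autocorrelation correction), the underlying trend and noise level are identical throughout, yet the confidence interval on the slope is noticeably wider once the first seven "years" are dropped:

```python
import numpy as np

def slope_ci_halfwidth(y):
    """Half-width of an approximate 95% CI on the OLS slope of y vs. time."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
    return 1.96 * se

# 30 "years" of monthly data with one fixed trend and noise level
rng = np.random.default_rng(1)
full = 0.001 * np.arange(360) + rng.normal(0.0, 0.1, 360)

print("30-year CI half-width:", slope_ci_halfwidth(full))
print("23-year CI half-width:", slope_ci_halfwidth(full[84:]))  # drop 7 years
```

Nothing about the remaining 23 years changed; the interval widened purely because fewer points constrain the slope.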

Wow…just….wow.
The flaws in your analogy…

You could have saved a lot of time by just admitting you don’t understand how statistical significance works.

seaice1

“1. The “Pause” hasn’t disappeared. It now just has a beginning and an end. But it is right there in the data where it always was, and it doesn’t cease to exist merely because we can’t calculate one starting from the present and working backwards.”
Here is how I see it. The pause as defined by Monckton and widely quoted as “the pause” is either there or it is not. The data still exist, and there will always be two dates 18 yrs and 3 months apart that you can draw a line between and find zero slope. However – unless you go back from today that is not “the pause”
You can go back to the start and end dates of the flat bit and you will find they start with one very large El Nino and end with another very large El Nino. If we were to pick these dates from the record to make any sort of claim about the longer term trend it would be cherry picking. Why should we choose these particular dates? The Monckton pause gave the illusion of not cherry picking by using today’s date as a start point, which can be argued to be non-arbitrary.
“In all three scenarios above, natural variability dominates in terms of any risk associated with a changing global temperature. That’s what we should be studying first and foremost.”
Yesterday’s post about the IPCC definition of climate change pointed out that they refer to all causes of climate change – natural and man-made. We are studying natural variation. Why this should have been seen as a problem yesterday I could not see at the time, and still do not see.
“The world has been warming for 400 years, almost all of it due to natural variability.” Natural variability is not a cause. The world has been getting dark and light every 24 hours due to natural variability. If we want to understand why this happens we must look for the cause. No longer do we think a chariot pulls the sun through the sky. We understand the causes of the variability. Similarly with climate. We cannot say “natural variability” and leave it at that, for this is not an explanation.

Bartemis

“We cannot say “natural variability” and leave it at that, for this is not an explanation.”
Sure, it’s an explanation, just not a very detailed one. But, lack of a detailed alternative hypothesis is not evidence in favor of accepting a given hypothesis. Just because a primitive villager does not know about the germ theory of disease is no reason for him to accept the shaman’s theory that the gods are punishing him.

seaice1
“Here is how I see it. The pause as defined by Monckton and widely quoted as “the pause” is either there or it is not. The data still exist, and there will always be two dates 18 yrs and 3 months apart that you can draw a line between and find zero slope. However – unless you go back from today that is not “the pause”
Stop right there. What the heck? That makes no sense. If there will “always be two dates 18 years and 3 months apart that you can draw a line between and find a zero slope” then that range can logically and rationally “always” be referred to as “the pause” until ANOTHER pause comes into existence at which point it becomes necessary to distinguish between them!! When we talk about the “Great Depression” does it mean that it is “either there or it is not”? Unless we go back from today, is it not “the depression”?
In the past, if Monckton was talking about a pause while that pause was occurring, then of course he’d call it “the pause”. Because it was “the” only pause he was talking about. He didn’t have to refer to it as “the current pause” because no one in the current climate change debate had ever referred to a “former pause”. If it has ended, no one today has to refer to it as “the old pause” or “the former pause” until there is another “pause”, because no one here is going to be confused about WHICH PAUSE (and its exact time frame) is being talked about! (no matter what you say about it, Captain Semantics)
“The world has been warming for 400 years, almost all of it due to natural variability.” Natural variability is not a cause. ”
You are 100% correct. Natural variability is defined as variations that occur from (are caused by) natural factors. Those natural FACTORS are the cause….natural variability is WHAT they cause. So let’s insert that into his sentence-
“The world has been warming for 400 years, almost all of it due to… variations from (or caused by) natural factors.”
See how easy it was to determine that maybe, just maybe, he wasn’t even remotely saying what you just had to insinuate that he was? Did it occur to you that maybe Werner Brozek didn’t READ yesterday’s post here at WUWT, or that he might see a real difference between the “IPCC referring to all causes of climate change” and actually STUDYING the natural variations?

george e. smith

If the ‘start’ date is today, and the ‘end’ date is last month (yes you only have two data points), then a straight line joining today’s temperature, and last month’s temperature , could have a sizeable slope, either positive or negative. And each temperature has some error bar, based on the ability to measure it accurately.
From your set of data points (in this case only two), you calculate the standard deviation, and using the ordinary rules of statistics, you can calculate the significance or insignificance of that line slope. With only two points close together it would take quite a large slope to be considered significant. Notice the planet could care less about the slope, it is the reporter who thinks it is significant or not to some often accepted standard.
As the ‘end’ date moves back in time giving more data points the standard deviation changes, and the amount of slope that is still regarded as insignificant, will get smaller and smaller, so we regard it as for all practical purposes to be zero.
Eventually this process of backing up the ‘end’ date will lead to a statistically best-fit line whose slope is no longer statistically zero, or near enough to it.
I forget what level of significance M of B uses but I think it is 95% confidence level or something like that.
I don’t pay attention to that because I trust that Christopher knows what he is doing.
Others might be looking for a calculated slope accurate to eight significant digits. So they wouldn’t consider the near zero slope to be insignificant. The level of significance that matters, is chosen by the reporter.
The planet could care less what the reporter thinks; it doesn’t pay any attention at all to the data. It is all in the past anyhow, so nothing is going to happen as a result of computing this month’s trend line; it is already too late for the planet to do anything about it, even if it could.
The planet acted in real time as temperatures changed, and it isn’t going to do anything different, just because somebody calculated a trend line from the numbers. The planet cannot and will not follow any trend line that somebody calculated. Nor will it give ANY clues as to what it will do next, but it will do whatever it is that can happen next and nothing else.
G

I forget what level of significance M of B uses but I think it is 95% confidence level or something like that.

For his pauses, he did not consider any %. He just went by the furthest back you can go and get a negative slope.
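That search – no significance test, just the sign of a fitted slope over every trailing window – can be sketched as follows. The function name `longest_pause` and the toy series are illustrative, not Monckton's actual code:

```python
import numpy as np

def longest_pause(anoms):
    """Length of the longest trailing window (ending at the most recent
    point) whose OLS slope is non-positive. No significance test is
    applied -- only the sign of the fitted slope, as described above."""
    best = 0
    n = len(anoms)
    for start in range(n - 2):            # windows of at least 3 points
        window = anoms[start:]
        x = np.arange(len(window), dtype=float)
        if np.polyfit(x, window, 1)[0] <= 0:
            best = max(best, len(window))
    return best

# toy series: 48 rising points followed by 24 gently falling ones
toy = np.concatenate([np.linspace(0.0, 0.5, 48), np.linspace(0.5, 0.45, 24)])
print(longest_pause(toy), "trailing points with non-positive slope")
```

Note that the answer depends only on the most recent data, which is why a warm spike at the end of the series can erase such a "pause" entirely.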

Werner:
Oh, isn’t it fascinating that the “pause” actually extends in length even after the hottest year in the instrumental record and the hottest month ever in the RSS record!
Please give it up – even I am embarrassed for you.
As Nick says TLT is on the way out or at least will be “adjusted” up in RSS v4.0.
Also – do you not agree that if the last 19 years since the 97/98 Nino is valid as a trend,
then so is the 19-year period up to it?
Obviously.
So look here please….
http://woodfortrees.org/graph/rss/from:1978/to:1997/trend/plot/rss/plot/rss/from:1997/to:2015.8/trend/plot/rss/trend
Drawing a trend-line through the whole period we have the purple one.
Drawing a trend line through the 1st 19 years we have the red one
Drawing a trend line through the last 19 years we have the blue one.
Notice anything?
I’ll explain if not…..
Extend the red line to the right.
Where does it cross the blue line?
Somewhere around 2025 is where.
So what does that tell us?
That in order for there to have been a “pause” – the blue line should have been BELOW the (extended) red one at all times.
What is shown there is a STEP-UP in temps that will not fall back to the initial trend for another 9 years.
Of course there is NO step-up…. just as there is NO “pause”.
BOTH are just an artifact of curve-fitting cherry-picking.
And that analysis makes a nonsense of your ( and Moncktons ) claims.
[??? .mod]
[Another mod here, also wondering why that comment makes no sense. -mod.]

Khwarizmi

Given that
i) a system that hasn’t reached thermal equilibrium doesn’t have a meaningful temperature, and that
ii) temperature is a poor measure of heat content anyway, and that
iii) the mathematical abstraction called “global average temperature” has no value in predicting the kind of weather we experience…
* * * * * *
Apr 6, 2016
KAMLOOPS, B.C. – Predictions of slushy, El Nino-dampened ski seasons were snowed under across British Columbia this winter as many resorts celebrated one of their most successful years.
http://www.680news.com/2016/04/06/b-c-ski-resorts-sidestep-el-nino-knockout-celebrate-remarkable-seasons/
* * * * * *
Why should we worry about it?
Is the abstraction useful for anything?

Tom in Florida

Only in attracting grant money.

Mark

If CO2 is supposed to have an effect of a greater order than natural variability then there should be no statistical flat trends for 18 or 20 years in data.
So while you try to take it out of context, it must remain in the context of the claims that CO2 is driving temperatures up.
Or more accurately, your post is full of fail.

BOTH are just an artifact of curve-fitting cherry-picking.
And that analysis makes a nonsense of your ( and Moncktons ) claims.

No, it does not and here is why. Suppose we measured the height of a 40 year old man every year on his birthday and plotted the results. Now suppose he stopped growing on his 20th birthday. A line from his 20th birthday would be horizontal with no slope, but a line from age 0 to age 40 would be positive. But that would not mean he continued growing after the age of 20.

This is why I just love to watch the CAGW extremists tying themselves in logical knots with obviously stupid statistics. It is also why Ross McKitrick and colleagues have lifetimes of debunking fun ahead of them. You could make a whole sub-discipline out of correcting climate pseudoscience statistical howlers.

Bellman

Suppose we measured the height of a 40 year old man every year on his birthday and plotted the results.

That would be a great analogy if temperature increased monotonically year on year at a steady rate. If your 40 year old man had changed height in the same way temperature does I’d be very worried for him.
When you have noisy data like temperature you simply cannot work out when or if a change in growth took place without doing some serious statistics.

When you have noisy data like temperature you simply cannot work out when or if a change in growth took place without doing some serious statistics.
And since people like Mann, Jones and Trenberth have demonstrated that they don’t understand basic statistics, what they’re left with is “noisy data” that is magnified by an axis that shows tenth and hundredth of a degree divisions.
It’s all bogus. But if they used one degree divisions, the whole world would see something so mundane and un-alarming that it would cause a mass yawning event:
http://i1.wp.com/www.powerlineblog.com/ed-assets/2015/10/Global-2-copy.jpg

Bellman

It’s all bogus. But if they used one degree divisions, the whole world would see something so mundane and un-alarming that it would cause a mass yawning event:

That’s funny, but you should really display the graph in Kelvin for maximum effect.

When you have noisy data like temperature you simply cannot work out when or if a change in growth took place without doing some serious statistics.

Serious statistics may have given a slightly different pause in January, but my guess is that it would still have been at least 15 years if not 18 years.

How clever and hilariously amusing
dbstealey posts a graph that flattens the y-axis for an ave global temp v time plot and therefore (down the rabbit-hole) falsifies AGW.
OK, then I expect you’d appreciate the BP/heart rate monitor on you in resus having a squashed y-axis, eh? (no offence – a dramatic example is all)
And irony of ironies that that graph was first posted by Brandon Gates as a tongue in cheek wind-up of denizens on here.

The anonymous Toneb says:
…how amusing…&blah, blah, etc.
No matter what chart I post, the ignorati can find something to object to. In fact, I’ll bet that anonymous ignoramus can find something to panic about here: [chart image]
And for a much longer time period, we see that global T is right at its lowest point in billions of years:
http://www.kogagrove.org/sams/agw/images/paleomap.png
(click in charts to embiggen)
And we would expect an ignoramus to complain about a Kelvin chart. So here’s a NASA/GISS chart showing the same thing, except in ºF: [chart image]
Next, here’s a chart showing that our planet is currently at the cold end of historical temperatures:
http://www.newscientist.com/data/images/archive/2839/28392301.jpg
Global T has been more than 10ºC WARMER than now — with no adverse effects.
And this chart puts all the wild-eyed, Chicken Little clucking in perspective: [chart image]
We see why the deceptive alarmist crowd uses tenth and hundredth of a degree divisions: those tiny divisions magnify what is simply natural climate variability: [chart image]
Here’s another geologic chart, showing that the Earth is currently at the cold end of its natural temperature range:
http://whatreallyhappened.com/IMAGES/GeoColumn.gif
Even during the Holocene, the planet has been much warmer than now:
http://snag.gy/BztF1.jpg
During the past century, temperatures have also been much warmer than now:
http://2.bp.blogspot.com/_b5jZxTCSlm0/Sv31ZY99ioI/AAAAAAAAD38/zHZkCLYg590/s1600-h/image017.png
(click in icon to embiggen)
Finally, here is a chart of global temperature changes since the mid-1800’s:
http://catallaxyfiles.com/files/2012/05/Mean-Temp-1.jpg
That one is in ºC. It shows that global T is simply not changing, despite the endless predictions of the alarmist cult.
They got their basic premise wrong: CO2 does not measurably raise global T.
So who are we gonna believe? The alarmist cult?
Or Planet Earth?
Because only one of them is right.

Werner:
As someone has said Werner – height, we can be sure, increases in one direction ONLY (up) incrementally. GMT does not.
Answer my critique of why it is a step-up, as the “pause” (~zero) trend-line does not cross the trend line for the first 19 years of the RSS series until 2025.
It is nonsense and it should be embarrassing to “true” sceptics.

Bartemis

“As someone has said Werner – height, we can be sure, increases in one direction ONLY (up) incrementally. GMT does not.”
Actually, my height took a quick step up somewhere in the early teens, then leveled off, and stopped completely around 20 or so. Lately, it has been declining. Eventually, I expect it to end about 6 feet below the moment-of-birth baseline.

That in order for there to have been a “pause” – the blue line should have been BELOW the (extended) red one at all times.

Why did you end your red line where you did? Ending it 18 months later would have given a completely different picture as you can see below by comparing the green and blue lines.
http://www.woodfortrees.org/plot/rss/from:1978/plot/rss/from:1978/to:1997.5/trend/plot/rss/from:1978/to:1999/trend

Bellman-
“When you have noisy data like temperature you simply cannot work out when or if a change in growth took place without doing some serious statistics.”
Serious statisticians doing serious statistics have determined that there has been NO statistically significant increase in global temperatures since 1993! And yet, you seem to be a global statistics denier. Why?

Bellman-
“That’s funny, but you should really display the graph in Kelvin for maximum effect.”
Like this?
http://s27.postimg.org/68cs7z8wj/Hadcrut4_Kelvin_1850_to_2013.png

Bellman

Aphan:

Serious statisticians doing serious statistics have determined that there has been NO statistically significant increase in global temperatures since 1993! And yet, you seem to be a global statistics denier. Why?

You don’t need to be a serious statistician to check that there’s been no statistically significant warming since 1993. In fact using the Cowtan tool I’d say it could be pushed back to mid 1992.
What that doesn’t tell you is if there has been a change in trend, or when that occurred.

Bellman-
“That’s funny, but you should really display the graph in Kelvin for maximum effect.”
Like this?

Yes, that’s the joke.

In fact using the Cowtan tool I’d say it could be pushed back to mid 1992.

Correct me if I am wrong, but I believe Cowtan uses 2 sigma which is 95.45% but Nick uses 95%.

Bellman

Werner Brozek

Correct me if I am wrong, but I believe Cowtan uses 2 sigma which is 95.45% but Nick uses 95%.

That’s correct, but I don’t think that would account for most of the difference.
I assume the differences are more due to what techniques are used to correct for autocorrelation.
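For readers curious what such a correction looks like, the most common approach assumes the residuals are AR(1) "red" noise and inflates the naive standard error by sqrt((1+r)/(1-r)), where r is the lag-1 autocorrelation. The sketch below shows the general idea only; whether either tool uses exactly this formula is not established here:

```python
import numpy as np

def trend_se_ar1(y):
    """Return (naive OLS slope SE, SE inflated for AR(1) residuals).

    The inflation factor sqrt((1 + r) / (1 - r)) comes from the
    effective-sample-size argument for lag-1 autocorrelation r."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    return se, se * np.sqrt((1 + r) / (1 - r))

# synthetic red-noise anomalies around a small trend
rng = np.random.default_rng(2)
noise = np.zeros(276)
for i in range(1, 276):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.0007 * np.arange(276) + noise

naive, corrected = trend_se_ar1(y)
print(f"naive SE {naive:.6f}, AR(1)-corrected SE {corrected:.6f}")
```

Because monthly temperature anomalies are strongly autocorrelated, the corrected interval is much wider, which is why significance dates differ between tools more than the 95% vs. 95.45% cutoff alone would suggest.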

Daryl S.

I think this analogy would make more sense if people grew in some years, but shrunk in others with an upward trend until the 20s.

BOTH are just an artifact of curve-fitting cherry-picking.
Pure projection, from the same crowd that’s trying to explain why Planet Earth is debunking their fantastic scares.
And regarding RACook, it’s at least somewhat satisfying that they still admit they don’t get how a (so-called) ‘pause’ is calculated.
I suspect they do get it — but if they admitted that, what follows is too uncomfortable for them to bear. They would have to climb down from their new talking points; that satellite data is NFG, and that there was never any ‘pause’.

Bellman

And regarding RACook, it’s at least somewhat satisfying that they still admit they don’t get how a (so-called) ‘pause’ is calculated.

Which so-called “pause” are you talking about?
Is it Monckton’s so-called Great Pause, based on finding the longest negative trend from the current date?
Is it the pause described in this post that “hasn’t disappeared. It now just has a beginning and an end”?
Is it the longest period you can find that has a non-significant trend?
Or is it what RACook describes, which involves finding the intercept between a flat line and a temperature trend?
I think I know how all of these are calculated except the last one.

dbstealey:
“BOTH are just an artifact of curve-fitting cherry-picking.
Pure projection, from the same crowd that’s trying to explain why Planet Earth is debunking their fantastic scares.”
If you are referring to my posted demonstration of the stupidity of both what Werner and Monckton (and me re a step-up) say – would you care to say something intelligent instead of another hand-wave dismissal?

catweazle666

Bellman April 7, 2016 at 12:15 pm
“And regarding RACook, it’s at least somewhat satisfying that they still admit they don’t get how a (so-called) ‘pause’ is calculated.
Which so-called “pause” are you talking about?

The pause that all these eminent climate scientists tied themselves in knots trying to explain?
Here are a few quotes for the ‘pause/hiatus’ deniers who always drip their anti-scientific nonsense on any climate change blog.

Dr. Phil Jones – CRU emails – 5th July, 2005 – “The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant….”
__________________
Dr. Phil Jones – CRU emails – 7th May, 2009 – ‘Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.’
__________________
Dr. Judith L. Lean – Geophysical Research Letters – 15 Aug 2009 – “…This lack of overall warming is analogous to the period from 2002 to 2008 when decreasing solar irradiance also countered much of the anthropogenic warming…”
__________________
Dr. Kevin Trenberth – CRU emails – 12 Oct. 2009 – “Well, I have my own article on where the heck is global warming…..The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”
__________________
Dr. Mojib Latif – Spiegel – 19th November 2009 – “At present, however, the warming is taking a break,”…….”There can be no argument about that,”
__________________
Dr. Jochem Marotzke – Spiegel – 19th November 2009 – “It cannot be denied that this is one of the hottest issues in the scientific community,”….”We don’t really know why this stagnation is taking place at this point.”
__________________
Dr. Phil Jones – BBC – 13th February 2010 – “I’m a scientist trying to measure temperature. If I registered that the climate has been cooling I’d say so. But it hasn’t until recently – and then barely at all. The trend is a warming trend.”
__________________
Dr. Phil Jones – BBC – 13th February 2010
[Q] B – “Do you agree that from 1995 to the present there has been no statistically-significant global warming”[A] “Yes, but only just”.
__________________
Prof. Shaowu Wang et al – Advances in Climate Change Research – 2010 – “…The decade of 1999-2008 is still the warmest of the last 30 years, though the global temperature increment is near zero;…”
__________________
Dr. B. G. Hunt – Climate Dynamics – February 2011 – “Controversy continues to prevail concerning the reality of anthropogenically-induced climatic warming. One of the principal issues is the cause of the hiatus in the current global warming trend.”
__________________
Dr. Robert K. Kaufmann – PNAS – 2nd June 2011 – “…..it has been unclear why global surface temperatures did not rise between 1998 and 2008…..”
__________________
Dr. Gerald A. Meehl – Nature Climate Change – 18th September 2011 – “There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period)….”
__________________
Met Office Blog – Dave Britton (10:48:21) – 14 October 2012 – “We agree with Mr Rose that there has been only a very small amount of warming in the 21st Century. As stated in our response, this is 0.05 degrees Celsius since 1997, equivalent to 0.03 degrees Celsius per decade.” Source: metofficenews.wordpress.com/2012/10/14/met-office-in-the-media-14-october-2012
__________________
Dr. James Hansen – NASA GISS – 15 January 2013 – “The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing.”
__________________
Dr Doug Smith – Met Office – 18 January 2013 – “The exact causes of the temperature standstill are not yet understood,” says climate researcher Doug Smith from the Met Office. [Translated by Philipp Mueller from Spiegel Online]
__________________
Dr. Virginie Guemas – Nature Climate Change – 7 April 2013 – “…Despite a sustained production of anthropogenic greenhouse gases, the Earth’s mean near-surface temperature paused its rise during the 2000–2010 period…”
__________________
Dr. Judith Curry – House of Representatives Subcommittee on Environment – 25 April 2013 – ” If the climate shifts hypothesis is correct, then the current flat trend in global surface temperatures may continue for another decade or two,…”
__________________
Dr. Hans von Storch – Spiegel – 20 June 2013 – “…the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) — a value very close to zero….If things continue as they have been, in five years, at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models….”
__________________
Professor Masahiro Watanabe – Geophysical Research Letters – 28 June 2013 – “The weakening of k commonly found in GCMs seems to be an inevitable response of the climate system to global warming, suggesting the recovery from hiatus in coming decades.”
__________________
Met Office – July 2013 – “The recent pause in global warming, part 3: What are the implications for projections of future warming?” Executive summary: “The recent pause in global surface temperature rise does not materially alter the risks of substantial warming of the Earth by the end of this century.”
Source: metoffice.gov.uk/media/pdf/3/r/Paper3_Implications_for_projections.pdf
__________________
Professor Rowan Sutton – Independent – 22 July 2013 – “Some people call it a slow-down, some call it a hiatus, some people call it a pause. The global average surface temperature has not increased substantially over the last 10 to 15 years,”
__________________
Dr. Kevin Trenberth – NPR – 23 August 2013 – “They probably can’t go on for much longer than maybe 20 years, and what happens at the end of these hiatus periods, is suddenly there’s a big jump [in temperature] up to a whole new level and you never go back to that previous level again,”
__________________
Dr. Yu Kosaka et al. – Nature – 28 August 2013 – “Recent global-warming hiatus tied to equatorial Pacific surface cooling: Despite the continued increase in atmospheric greenhouse gas concentrations, the annual-mean global temperature has not risen in the twenty-first century…”
__________________
Professor Anastasios Tsonis – Daily Telegraph – 8 September 2013 – “We are already in a cooling trend, which I think will continue for the next 15 years at least. There is no doubt the warming of the 1980s and 1990s has stopped.”
__________________
Dr. Kevin E. Trenberth – Nature News Feature – 15 January 2014 – “The 1997 to ’98 El Niño event was a trigger for the changes in the Pacific, and I think that’s very probably the beginning of the hiatus,” says Kevin Trenberth, a climate scientist…
__________________
Dr. Gabriel Vecchi – Nature News Feature – 15 January 2014 – “A few years ago you saw the hiatus, but it could be dismissed because it was well within the noise,” says Gabriel Vecchi, a climate scientist…“Now it’s something to explain.”…..
__________________
Professor Matthew England – ABC Science – 10 February 2014 – “Even though there is this hiatus in this surface average temperature, we’re still getting record heat waves, we’re still getting harsh bush fires…..it shows we shouldn’t take any comfort from this plateau in global average temperatures.”
__________________
Dr. Jana Sillmann et al – IopScience – 18 June 2014 – Observed and simulated temperature extremes during the recent warming hiatus: “This regional inconsistency between models and observations might be a key to understanding the recent hiatus in global mean temperature warming.”
__________________
Dr. Young-Heon Jo et al – American Meteorological Society – October 2014 -“…..Furthermore, the low-frequency variability in the SPG relates to the propagation of Atlantic meridional overturning circulation (AMOC) variations from the deep-water formation region to mid-latitudes in the North Atlantic, which might have the implications for recent global surface warming hiatus.”

So tell us Bellman, what do you THINK YOU know that Dr. Phil Jones, Dr. Judith L. Lean, Dr. Kevin Trenberth, Dr. Mojib Latif, Dr. Jochem Marotzke, Prof. Shaowu Wang, Dr. B. G. Hunt, Dr. Robert K. Kaufmann, Dr. Gerald A. Meehl, Dave Britton, Dr. James Hansen, Dr Doug Smith, Dr. Virginie Guemas, Dr. Judith Curry, Dr. Hans von Storch, Professor Masahiro Watanabe, Professor Rowan Sutton, Dr. Yu Kosaka, Professor Anastasios Tsonis, Dr. Gabriel Vecchi, Professor Matthew England, Dr. Jana Sillmann and Dr. Young-Heon Jo don’t?

Bellman

catweazle666:

So tell us Bellman, what do you THINK YOU know that Dr. Phil Jones, Dr. Judith L. Lean, Dr. Kevin Trenberth, Dr. Mojib Latif, Dr. Jochem Marotzke, Prof. Shaowu Wang, Dr. B. G. Hunt, Dr. Robert K. Kaufmann, Dr. Gerald A. Meehl, Dave Britton, Dr. James Hansen, Dr Doug Smith, Dr. Virginie Guemas, Dr. Judith Curry, Dr. Hans von Storch, Professor Masahiro Watanabe, Professor Rowan Sutton, Dr. Yu Kosaka, Professor Anastasios Tsonis, Dr. Gabriel Vecchi, Professor Matthew England, Dr. Jana Sillmann and Dr. Young-Heon Jo don’t?

Ah good, argument by authority. You’ll be calling me a pause denier next.
Seriously though, I’m quite prepared to believe there’s been a pause. I just think it needs to be determined exactly what people mean when they say there has been a pause, and then evidence provided that it is statistically significant – just as I’d want to see statistically significant evidence that there’s been warming.
As to that very long list: some I probably would disagree with, others are agreeing that there hasn’t been a statistically significant change. Most, I’d guess, are using words like hiatus to describe apparent changes in trends without claiming this is a proven change in the underlying warming.

catweazle666

Bellman: “Ah good, argument by authority.”
What do you base your arguments on if not from authority?
Certainly not from any understanding of the science, that’s certain.

Bellman

catweazle666

What do you base your arguments on if not from authority?
Certainly not from any understanding of the science, that’s certain.

I agree with that, sorry if my sarcasm wasn’t obvious. I try to avoid commenting on matters of science as I would happily defer to the expertise of actual authorities.
But the pause is more about statistics and semantics than science. What do you actually mean by a pause or hiatus, and can you show that there exists a statistically significant pause using your definition?
In any event, your list is not exactly a list of peer reviewed papers proving the existence of the pause. It’s just quote mining, and I cannot see any reference to the sorts of pause described by Monckton or RACook.

Toneb,
You go all ad hom on people who have written numerous articles here, under their own name. You could write an article yourself. But you won’t.
Why not?
First, if you did, your fuzzy-headed, confused thinking about the so-called ‘pause’ would be demolished in short order, by the same people you’re trying to denigrate.
And second, you would have to man up and identify yourself. The opinions of an anonymous coward carry little weight here. So you have a choice:
Come down out of the shadows of the peanut gallery, and state your position in an article posted under an identifiable name. Or, continue with your amusing but lame pot shots from the safety of your ‘toneb’ anonymous screen name. One way you get much respect. The other way, you get much deserved derision, but no respect.
Next, I note that catweazle666 has taken up Bellman’s schoolyard challenge, and rammed it so far up his fundament he’ll have to gargle to get it out. Bellman’s responses to that long list were so lame they’re not worth commenting on; he’s just tap-dancing.
Good job, cat.

How is a global mean temperature measured? I had a look at 65 years of temperature measurements in the Pacific El Nino area and (a) there’s the question of methodology and then (b) interpretation – depending on the sample period the current upper ocean temperature there is either an historic high or a correction from a longer-term low period!
Can anything else be measured more reliably, e.g. sea levels (since some Pacific islands are said to be in danger of eventual submersion)?
http://polynesiantimes.blogspot.co.uk/2016/04/pacific-weather-weirding-and.html

I agree with David. The Pause doesn’t mean CO2 doesn’t have an effect on climate (as some would assume), but it does probably mean one of those two things. There are papers supporting both of those assertions, see here and here.

Or, I should’ve said that it probably means one of those things to a lesser extent (perhaps not quite grossly overestimated or underestimated.)

JohnTyler

So here we are again discussing the temperatures over the last 23, 25, 50 years – whatever – and plotting all sorts of lines through (massaged? or not? ) data and calculating R values, t values, etc.
Meanwhile, the Medieval Warm Period – which lasted several hundred years – is totally ignored and remains unexplained.
So, what caused the MWP?
Was it caused by “excess” CO2 in the atmosphere?
If so, from where did the CO2 originate?
What caused the MWP to end and what caused the subsequent Little Ice Age?
If the MWP was caused by “too much” CO2, where did it all go thus permitting the onset of the LIA?
Did not eventually the LIA come to an end?
Did it end because CO2 become more abundant?
And during lengthy periods of “abnormally” cold weather, how is it possible that CO2 can become more abundant?
If so, where did all this “excess” CO2 originate such that it caused the LIA to end?
Is not the science settled? Is not the AGW thesis based on first principles?
If so, WHERE ARE THE EXPLANATIONS FOR THE ABOVE QUESTIONS?
Pray tell, if the historic climate cannot be explained, how can anyone presume to predict the future climate?
If the AGW thesis is based on first principles, why are weather forecasts just 10 days into the future unreliable?
Why is no one asking any of these sorts of questions??
Why is time being spent on statistical analyses of time periods that are incredibly short and meaningless?
And I thought listening to Bernie Sanders talking up the benefits of a “new” form of Marxist-Leninist government, despite a PERFECT, 75 year, 100% record of its failure, was bizarre.

Sweet comment. Thanks.

Bindidon

Meanwhile, the Medieval Warm Period – which lasted several hundred years – is totally ignored and remains unexplained.
That’s your private meaning…
So, what caused the MWP?
Maybe this helps: ‘http://www.nap.edu/read/11676/chapter/1’
The leading author is no warmist 🙂
Was it caused by “excess” CO2 in the atmosphere?
No. If that poor CO2 had been the source of it, we would see some traces in the ice cores on Antarctica and Greenland.
One possible explanation for the LIA is a long sequence of huge volcanic eruptions, starting around 1257 with the Samalas volcano on Lombok Island, Indonesia (the strongest in thousands of years, with darkened skies for decades; Laki on Iceland around 1783 was a toy in comparison).

Gloateus Maximus

The LIA, like all the prior cold periods in the Holocene and previous interglacials, was caused by solar variation. It was defined by three or four solar minima, the last two of which were the strong Maunder and Dalton, with warmer counter trend cycles in between.
It could not have been caused by volcanic activity alone, since there has been just as much if not more volcanism during the Modern WP as during the LIA. Krakatoa and Pinatubo spring to mind, but lots of lesser eruptions too. The temperature effects of even the biggest eruptions are short-lived.
The LIA and Modern Warm Period are caused by the same forces as the Dark Ages and Greek Dark Ages Cold Periods of 1500 and 2500 years ago (plus older ones back to the 8200 BP Event) and the Medieval, Roman, Minoan, Egyptian and Holocene Optimum WPs of 1000, 2000, 3000, 4000 and 5000 years ago.
CO2 has precious little to do with it. Nothing the least bit out of the ordinary is happening now or has in the past century. If you want to see impressive rapid and prolonged warming, look not at 1977 to 1998, but at the early 18th century recovery cycle from the depths of the Maunder Minimum during the LIA.

Actually, Gerald North is an alarmist. See here and here.

Woops, those links were for Dr. Robert Dickinson, but I have some here and here.

Bindidon

Gloateus Maximus
Slowly but surely it gets really boring all the time to read “CO2 has precious little to do with it. ”
Why do people like you ALWAYS speak about this poor CO2? Did I?
Your assumptions:
– LIA caused by solar minima (Maunder, Koch, Dalton, etc.) is simply wrong, as scientists at the Potsdam Climate Institute in Germany have computed that their combined effect causes at best a cooling of 0.3 °C per century;
– That there were as many HUGE eruptions during the MWP as during the LIA is wrong too.
Look at this list:
http://www.livescience.com/30507-volcanoes-biggest-history.html
And even there you won’t find the Samalas.
Here it is:
http://www.pnas.org/content/110/42/16742.full
A good guy indeed, something like the better known Ilopango 1500 years ago.

Gloateus Maximus

Bindidon,
You seriously consider the Potsdam Mafia to be scientists? Thanks for the laugh.
If you had bothered to look at total particulates and sulfates in the ice cores, you’d see that over the centuries of the LIA, they differ little from the two MWPs before and after it. There was a big one in the MWP, in the mid-13th century, but well before the LIA. It was followed by another century of warmth.
As I said, volcanic effects are short-lived, on the scale of years at most, not decades, let alone centuries.

Bindidon-
Pretty much a public meaning – the MWP has been totally ignored in this discussion.
And as far as the question “what caused the MWP” goes, your link demonstrates a whole lot of “it could have been” and absolutely zero “we know exactly what caused it”. There have also been 10 years’ worth of studies done since that report that conclude that the MWP was worldwide and warmer than today.
“No. If that poor CO2 had been the source of it, we would see some traces in the ice cores on Antarctica and Greenland.”
And THAT is the rub. All that warming and abrupt climate change without any CO2 surges. And bringing up the LIA does nothing at all to explain the MWP.

justanotherpersonii

You are on the wrong post.

Bindidon says:
One possible explanation for the LIA is a long sequence of huge volcano eruptions, starting around 1257…&etc.
Next question: What caused the Roman Warming Period?
Follow-up to however you wing that one:
What caused the Minoan Warming?
When you’ve fabricated some sort of answer, next question: What caused the Holocene Climate Optimum? Or the Eemian?
Yet the alarmist crowd still asserts that human CO2 emissions are the primary cause of the current warming.
In any branch of the hard sciences, lame arguments like that would have been laughed out of the room by the adults present. But after a century of being unable to back their ‘human CO2’ conjecture with any credible measurements, they’re still trying to convince people they have the answers. They don’t.
And there is nothing either unusual or unprecedented happening, despite all their ridiculous Chicken Little pronouncements:

JohnTyler
Your questions ARE legitimate, and have been asked before.
And – every time they are asked, they are ignored as “An Inconvenient Question” in search of the Truth. Your questions are WHY the Medieval Warming Period was removed with such fanfare and promotion by the IPCC and its allies when Mann published his travesties of the smoothed “hockey stick” temperature record.

Oldseadog

+ 1

John Tyler,
I don’t think you understand the tactics of the climate alarmist crowd. They ask questions; skeptics answer them. Then skeptics ask questions, but the alarmist crowd either deflects the questions, or ignores them, or goes off on another tangent. But they don’t answer questions.
But as we see, there is nothing out of the ordinary happening:
http://vortex.accuweather.com/adc2004/pub/includes/columns/climatechange/2011/590x189_12100149_sc_rss_compare_ts_channel_tlt_v03_3.png
The natural rise in global T has remained well within past parameters, when human CO2 emissions were not a factor:
The natural rise in global T is seen in other databases.
The alarmist crowd refuses to use whole degrees, even though that is much more honest than using the magnifying effect of tenth and hundredth of a degree axes. Those tiny divisions are negated by the error bars.
This is what all the wild-eyed hand waving has been about:
http://4.bp.blogspot.com/-lPGChYUUeuc/VLhzJqwRhtI/AAAAAAAAAS4/ehDtihKNKIw/s1600/GISTemp%2BKelvin%2B01.png
The same alarmist clique is incapable of identifying any “fingerprint of AGW” in current temperatures, since there is nothing unprecedented or unusual compared with past temperature records:
http://jonova.s3.amazonaws.com/graphs/hadley/Hadley-global-temps-1850-2010-web.jpg
The entire “carbon” scare is based on assertions that CO2 will cause runaway global warming. But that crowd has NO verifiable, testable measurements showing any “fingerprint of AGW”. All they have ever had are their opinions, nothing more. AGW has never been measured. It is simply too small. Therefore, the global warming scare is a non-problem.
I’m still waiting for one of them to man up and acknowledge that there is no evidence of any global harm or damage from the rise in CO2. Where is the problem? Since they can’t produce any examples of global harm, then CO2 must be considered “harmless”.
Skeptics of the “dangerous AGW” scare won the scientific debate a long time ago. Every alarmist argument has been demolished, based on the fact that they cannot produce any credible measurements quantifying their assertions. So now it’s all politics, all the time. The alarmist propagandists can’t produce any science-based measurements of what they claim is happening. They expect the public to trust them. But there is no basis for trust. From the President on down they’re lying to the public, like Elmer Gantry claiming he can make it rain.

John Finn

Some of your arguments are a tad embarrassing to serious sceptics. The invitation to post the mean global temperature on the Kelvin scale by a previous commenter was a wind-up. You might note that if you plot the temperature back 20,000 years any trend would be barely discernible – yet this period includes the Last Glacial Maximum when mile high ice sheets spread across much of the northern hemisphere.
It’s good to know that you can make the case that any future glacial advance on that scale should be of no concern to us.
Also: We do know what caused the Holocene Climate Optimum – it was orbital forcing.
There are several other points but I can’t be bothered.

John Finn,
First off, you are no skeptic, and never have been. Like the other True Believers in ‘dangerous AGW’, you don’t have a skeptical bone in your body. Your belief makes you certain, and that’s enough for you.
Next, the best response you can come up with is that the chart I posted is in Kelvin. Of course, you disregarded the chart in ºF, which shows the same exact thing. You’re just looking to argue. But all you have is your baseless opinion, while I post verifiable facts.
But hey, if the Kelvin chart bothers you so much, here’s a NASA/GISS chart in ºF:
The rest of your opinions are similar. They’re just hit ‘n’ run nonsense, such as your certainty that unlike many other scientists, you know the cause of the HCO. Since you’re so smart (*snicker*), maybe you’d like to give us your very certain opinion about the causes of each of the other Holocene warming events… oh, I almost forgot: you “can’t be bothered”.
The real fact is that like alarmists in general, you have much certainty, but zero honest skepticism. So it provides lots of amusement for skeptics, who note that not one scary prediction you’ve ever made has come true — even while you post about things you insist you “know” — but which are still being debated by thousands of scientists.
Enjoy your certainty, John. You’ve probably never heard the quote that fools are certain, while wise men are never sure…

John Finn

First off, you are no skeptic, and never have been. Like the other True Believers in ‘dangerous AGW’, you don’t have a skeptical bone in your body. Your belief makes you certain, and that’s enough for you.

Funny that – since Roy Spencer, Richard Lindzen, Jack Barrett and most others who have scientific credibility AND are sceptical of Catastrophic AGW tend to accept that some warming is inevitable from increased CO2 concentrations. There is a broad agreement that sensitivity is around 1.2 degrees per 2xCO2. Steve McIntyre has considered the claims that CO2 is irrelevant and dismissed them in 2008 when he analysed emission spectra in one of his blog posts.
I am, therefore, wondering who it is that agrees with you. Which particular amateur blogger with his or her own crackpot theory has your support or do you simply stick to posting outdated and irrelevant nonsense to support your case?

John Finn,
I’m sorry… dbstealey has never said, as far as I know or have paid attention, that CO2 does not or will not have an effect on global temperatures. His point is always that WE DON’T and CANNOT possibly claim to KNOW that it has or will or does, because we at this point cannot subtract the influence of all of the natural factors involved in the climate to prove 1) that it IS influencing temperature rises AND 2) how much of any influence we might find can be directly and unquestionably attributed to HUMAN emissions of CO2.
So, it’s really “funny” that you bring up “Roy Spencer, Richard Lindzen, Jack Barrett and most others who have scientific credibility AND are sceptical of Catastrophic AGW tend to accept that some warming is inevitable from increased CO2 concentrations” because as far as I can tell, dbstealey has never claimed to disagree with them on that.
He says above: “The entire “carbon” scare is based on assertions that CO2 will cause runaway global warming. But that crowd has NO verifiable, testable measurements showing any “fingerprint of AGW”. All they have ever had are their opinions, nothing more. AGW has never been measured. It is simply too small. Therefore, the global warming scare is a non-problem.”
Notice, he does NOT say “AGW” does not exist. He says “it’s simply too small to be measured”. He is skeptical of “runaway global warming” just like Spencer and Lindzen and Barrett etc. And like most of us, he’s sick and tired of people screaming that global warming is a BAD thing when there is no absolute proof that it has done ANYTHING except make things more livable for humanity.
I would logically assume that dbstealey wouldn’t be too thrilled with a “future glacial advance” on the scale of the LGM at all. No one would be. So your snippy remark was just petty. But I think he’d tell you that humans probably cannot affect or change that either.

Aphan says:
dbstealey has never said, as far as I know or have paid attention, that CO2 does not or will not have an effect on global temperatures.
As usual, a very logical and accurate response that cuts to the chase. People like John Finn just love to set up their strawman arguments, then demolish them as if they were my arguments.
They aren’t. And like Willis Eschenbach constantly asks, please quote my words. You know, like I quote yours, John.
But if you quoted me verbatim, your strawman arguments would go up in smoke.
So you misrepresent in order to try and get a leg up. But really, John, you’re just not clever enough to do that.

It still looks like an El Niño pattern superimposed on a slight warming trend – with El Niño not explained by the IPCC models.

Dodgy Geezer

It’s important to note that it’s not enough for the warmists to detect ‘statistically significant’ warming.
We are coming out of an Ice Age. So there ought to be a continual base warming signal in the data. For the CO2 AGW hypothesis to hold, there should be an EXTRA warming signal due to the CO2, over and above the natural base warming.
I’m not too sure what this base warming rate is, but it certainly exists, and should be subtracted from the observed data before any assertion is made about climate change. If we have had essentially no variation in 23 years, that indicates that there has been a FALL in temperatures in real terms…
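The arithmetic behind this point can be put in a few lines. A minimal sketch, where both rates are hypothetical placeholders (the comment gives neither a base rate nor a specific observed trend):

```python
# Sketch of subtracting an assumed natural "base" warming rate from an
# observed trend. Both rates are hypothetical, chosen only to show the
# arithmetic; neither is a measured value from this post.
observed_trend = 0.9     # °C per century, e.g. a satellite-era estimate
assumed_base_rate = 0.5  # °C per century, hypothetical natural recovery

# Whatever remains after removing the assumed natural signal is the
# most that could be attributed to an extra (e.g. CO2) warming signal.
residual_trend = observed_trend - assumed_base_rate
print(f"residual trend: {residual_trend:.1f} °C/century")
```

On these made-up numbers the residual is positive; with a larger assumed base rate it could be zero or negative, which is the commenter’s point.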

kim

The Earth’s been cooling since the Holocene Optimum around 5,000 years ago. Our Modern Warm Period is one of the warming excursions from this long trend and we’ve not reached the peak of the Medieval, the Roman, or the Minoan Optima.
You had better hope that the recovery from the Little Ice Age, the coldest depths of the Holocene, has been predominantly natural warming, for if man has done the heavy lifting of warming, we can’t keep it up much longer.
The higher the sensitivity of temp to CO2, the colder we would now be without AnthroCO2. However much man has contributed to recent warming, by just that much would we now be colder. At the range of sensitivities that alarmists would like to scare us with, those above 2-3 degrees C, we have already prevented cooling.
So far, as a species, our perspective on this whole climate thing has been extremely blinkered.
=====================

Reminds me of “How to Lie with Statistics” from my college classes. Different beginning and ending points always yield different outcomes, used as they are in climate modeling. Which temperature records to use? I guess technically one should include the entire instrumental record, since any other choice could be called cherry-picking. Even then, one could argue that if we had more data, the answer would be different. It certainly could be. All any of this proves is that statistics can give you whatever you want, within reason. What statistics cannot do is predict the future with any certainty, nor produce models that predict with certainty. We just don’t know what the climate will do. It’s all smoke and mirrors.
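The endpoint-sensitivity complaint is easy to demonstrate numerically. A minimal sketch using synthetic data (a made-up trend plus noise, not any real temperature record):

```python
import numpy as np

# Synthetic monthly anomalies: a mild underlying trend plus noise.
# All numbers here are illustrative, not real climate data.
rng = np.random.default_rng(42)
months = np.arange(360)                      # 30 years of monthly data
anomaly = 0.001 * months + rng.normal(0, 0.15, size=months.size)

# Fit an ordinary least-squares trend from several different start points.
slopes = {}
for start in (0, 120, 240):
    slope = np.polyfit(months[start:], anomaly[start:], 1)[0]
    slopes[start] = slope * 1200             # °C/month -> °C/century
    print(f"start month {start:3d}: {slopes[start]:+.2f} °C/century")
```

The underlying trend never changes, yet the fitted slope does: the shorter the window, the more the noise dominates the estimate.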

Michael Maddocks

I still think the discrepancy between predictions and data is more important. The actual trend only becomes important if it shows significant cooling, in which case the whole CAGW hype-train will be derailed – assuming the public hasn’t been dumbed down enough to accept a 1984-esque rewriting of history.

skeohane

Hansen hired Winston Smith years ago.

RD

@davidmhoffer – thanks for comments regarding the pause.

Richard M

I would prefer the trend be computed with ENSO corrections. This would remove the dependence on when an El Niño or La Niña occurs, and would eliminate silly comments like the one Stokes made above. Since this work has been done, why not use it? Just extend the work done in this diagram to the current date and compute the trend.
http://www.nature.com/ngeo/journal/v7/n3/images/ngeo2098-f1.jpg
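For what it’s worth, the usual way such a correction is done (e.g. Foster and Rahmstorf 2011) is a multiple regression of the anomaly series on time plus an ENSO index, reading the trend off the time coefficient. A minimal sketch with synthetic placeholder series, not real temperature or ENSO data:

```python
import numpy as np

# Sketch of an ENSO-adjusted trend via multiple regression. Both series
# below are synthetic placeholders, not real temperature or ENSO data.
rng = np.random.default_rng(0)
n = 240                                    # 20 years of monthly data
t = np.arange(n)
enso = np.sin(2 * np.pi * t / 45) + rng.normal(0, 0.3, n)  # fake ENSO index
temp = 0.0012 * t + 0.1 * enso + rng.normal(0, 0.1, n)     # fake anomalies

# Design matrix: intercept, time, ENSO index (lagged terms omitted for brevity).
X = np.column_stack([np.ones(n), t, enso])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

raw = np.polyfit(t, temp, 1)[0]
print(f"raw trend:           {raw * 1200:+.2f} °C/century")
print(f"ENSO-adjusted trend: {coef[1] * 1200:+.2f} °C/century")
```

On this synthetic example the adjusted coefficient recovers the trend that was built into the series; whether doing the same to real data is legitimate is exactly what the replies dispute.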

Bindidon

Looks good to me (I try to keep equidistant from warmists and ‘coolists’). Plot (c) probably would give the traditional 1.2 °C / century for RSS 3.3…
But take care! Skeptics do not at all appreciate these ENSO and volcano removals, as you can see e.g. here:
https://bobtisdale.wordpress.com/2012/01/14/revised-post-on-foster-and-rahmstorf-2011/

Richard M April 7, 2016 at 7:04 am:
You have no idea what you do when you say “ENSO remove.” ENSO is an integral part of every temperature graph there is, from start to finish, and can’t be removed. It can be suppressed on paper only but when you do that you destroy information and create errors. Look at any temperature curve and you see a saw tooth pattern. All these teeth are El Nino peaks and the valleys in between are La Ninas. There is an equal number of them because they are created in pairs. Together they cover the entire temperature curve with no even segments anywhere. None of them are caused by volcanic cooling because volcanic cooling does not exist in the troposphere. Those so-called “volcanic cooling valleys” thrust upon us are just misidentified La Ninas. An example is the La Nina of 1992/93 that follows Pinatubo eruption which was assigned to it because by chance it was in a place where volcanic cooling was expected. The reverse can also happen as with El Chichon where the eruption is followed by an El Nino peak, not by any La Nina valley that can be used as phony volcanic cooling.

Remove ENSO variations?
Which ones?
The ones whose effects you can see or the ones you surmise are there?
ENSO is cyclic, but is currently unpredictable.
Or are you suggesting that all outliers be discarded?

Ohhhhh! Now take out all of these eruptions too!
https://en.wikipedia.org/wiki/List_of_large_volcanic_eruptions_of_the_19th_century
Let’s see exactly how warm the globe would be today without ALL of the eruptions since 1800!!!

AJB

“Since this work has been done, why not use it?”
Because it’s complete and utter fabricated cobblers, not to put too fine a point on it.

Aw man..AJB…you ruin ALL the fun! 🙂

A better illustration of the current situation is shown below.
See figs 1, 3,4 and 8 at
http://climatesense-norpag.blogspot.com/2016/03/the-imminent-collapse-of-cagw-delusion.html
The millennial temperature peak is seen at about 2003. This corresponds to the solar activity peak at about 1991 (Fig. 8). The previous temperature peak was at 990 +/- (Fig. 4).
The El Niño peak is a temporary aberration from the cooling trend (blue line), which will continue with various ups and downs until about 2650 +/- (Fig. 4).

Mark

Has anyone noticed the desperation of some to try to isolate the selection of a pause in warming from the data from AGW theory?
They will attack the claimed cherry-picking while forgetting that the very theory they are pushing makes pauses in warming of 10 years or more statistically relevant.
CO2 in the atmosphere is causing warming of a greater order than natural variability, they say; so when CO2 goes up and up, and there is a longer-than-10-year pause in increasing temperatures, they attempt to isolate the statistics from the theory, and claim cherry-picking.
The start date for ice levels is the biggest cherry-pick in the field. But it’s apparently OK to do that.

Mark

Plus, temperature is following Hansen’s original Scenario C (draconian cuts).
Now he’s back again claiming waterworld is on the horizon. The guy has a few screws loose.

It appears from your comment that you don’t understand Scenario C or other aspects of Hansen’s paper.
You’re in good company; Steve McIntyre made a similar error until I corrected him several years ago.
[Link, please? -mod]

John C

Why does everyone put so much into charts, graphs, trend lines and on and on? I’m no scientist, but I have been a business owner for 40 years. Critical thinking skills and logic would seem appropriate. Nick wants me to believe that the 33 or so molecules in 85,000 parts of our atmosphere are causing runaway heating. As silly as that seems to me, he then wants me to believe that the ONE molecule that is human-caused is the main driver of this runaway climate change. Since we obviously can’t anytime soon eliminate all of this one human-caused molecule, what percentage does anyone think we can eliminate? And then I’m expected to believe this tiny percent change, at a tremendous cost in money and possibly lives in developing countries, is worth it. Hubris falls short in defining this belief. Why don’t we invest in better preparing for what nature throws at us, whether it’s hot or cold, and stop goofing around trying to control it with some trumped-up idea that CO2 is poison?

Joel Snider

Also that the Pause – insofar as it existed, upwards of two decades – is not significant if it doesn’t precede cooling, as opposed to demonstrating the lack of warming, in the face of increasing CO2, predicted by the models. Not to mention the absence of all the consequences that were supposed to result.

Djozar

Concur; once the EPA decided this low-level gas was a pollutant, I knew Big Brother was here. If you want to claim it’s anthropogenic, why not look at water vapor, SOx, NOx, etc.?

Harry Passfield

John C: The way I had it put to me goes something like this: You have a v large swimming pool into which you have dumped 99,962 white ping-pong balls (which keep the water temperature fairly stable and very comfortable). You then add 38 blue balls which, as well as keeping the water healthy, have the (claimed) ability to raise the temperature by a very, very small amount – that cannot be verified with a thermometer (or by dipping a toe in the water!). You then add two more blue balls, which, although being man-made have a similar tendency to warming (it is believed). Now, do you think it would be safe to enter the water, or will it be too hot?

John C.

That’s my point. Illogical at best. Outright propaganda that too many believe. I suppose the mass of the CO2 molecule can factor in, but even that comparison falls short of making me believe that the tiny amount my tailpipe emits, plus the electricity for my house and business, is doing anything. Sorry, AGW promoters, I need a bigger problem to stay awake at night worrying about. I truly believe that after pollution controls greatly improved the air we breathe, as they needed to, they needed another problem with which to attack business and capitalism. So they found CO2, a naturally occurring gas that is great for the greening of the planet, and they called it a controlled substance and a poison. And this came from educated pinheads. Sad state we find ourselves in today.

Slipstick

John C,
Why? Because…physics. Increasing the CO2 concentration in the Earth’s atmosphere will cause the temperature of the atmosphere to rise. Anyone who doubts that is simply mistaken, no matter how fervently they believe otherwise. The question is how much rise and what are the consequences. You mentioned costs; what are the costs if the majority of the science community is right about the effect of CO2 at projected concentrations?

Slipstick

Harry Passfield, if your analogy is meant to represent the current conditions in the Earth’s atmosphere, the number of blue balls should be continuously increasing. Also, how small is very, very? The atmospheric temperature change necessary to have a deleterious effect on the climate is many times smaller than that necessary to make a comfortable swimming pool too hot for safety.

Harry Passfield

Slipstick: Is it meant to represent the current conditions, etc? Of course not. It’s a load of balls. Oh, hang on… (And there’s always one.)

John C.

Slipstick: Let’s ignore the FACTS presented to you. Let’s also ignore the sun, water vapor, ocean cycles and all the other things we don’t understand. The agreed FACT is that only one molecule in 33 is human-caused. This is in 85,000 molecules of air. Let’s assume, and you seem to like assumptions as opposed to facts, that we can magically stop adding that one molecule. What effect do you think it would have? Not to mention that all the efforts the greens and government propose and are undertaking will make very little reduction overall. A waste of time and resources, and dangerous. Answer my question: why don’t we put our efforts into coping with climate instead of trying to control it? Logic doesn’t seem to apply, only emotion and agenda. The agenda is the part I haven’t figured out yet. It’s so crazy there must be something I’m missing.
And if all you have are platitudes and Algore propaganda, don’t waste my time.

Bernard Lodge

Slipstick:
‘Anyone who doubts that is simply mistaken, no matter how fervently they believe otherwise.’
Wow! You seem pretty sure about your opinion. In fact it sounds like you fervently believe it.

Slipstick

A rise in the equilibrium temperature of a gaseous system with an increasing proportion of CO2 exposed to infrared energy is not an opinion, it’s physics, and yes, I fervently believe in physics.

“You have a v large swimming pool”
The swimming pool is a good analogy. Imagine adding 400 ppm of ink. Then you can’t see the bottom. In the IR range, in the air, CO2 is ink. And radiant heat needs a clear view to emerge. Otherwise less efficient modes of heat transfer are used.

catweazle666

Nick Stokes: “The swimming pool is a good analogy. Imagine adding 400 ppm of ink. Then you can’t see the bottom.”
Totally wrong.
Stop making stuff up.

skeohane

Wasn’t it Ångström’s assistant who showed that all the IR that CO2 can absorb is already being absorbed? Adding more CO2 makes no difference unless the sun outputs more IR.

“Totally wrong.”
Your evidence?
According to Beer’s law, the total absorption of light (or IR) by a solute depends on the amount of it in the light path. If you dilute it so that the column is deeper (but same cross-section) the total absorbed is the same.
400 ppmv of ink in a 2.5m deep pool is equivalent to adding a 1mm layer and stirring. And 1 mm of ink is quite opaque.
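Nick’s ppm-to-layer equivalence is straightforward to verify. A minimal sketch of the Beer’s-law conversion (my own arithmetic, consistent with the numbers he gives):

```python
# Sketch: check the ppmv-to-layer equivalence from the comment above.
# By Beer's law, ink uniformly mixed into a water column is optically
# equivalent to a pure-ink layer of thickness (concentration x depth).
pool_depth_mm = 2500        # a 2.5 m deep pool, in millimetres
concentration = 400e-6      # 400 ppm by volume

equivalent_layer_mm = concentration * pool_depth_mm
print(equivalent_layer_mm)  # 1.0 -> the 1 mm layer of pure ink Nick describes
```

The same proportionality gives Werner’s later point: a 25 m column at the same concentration would correspond to a 10 mm layer.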

AJB

“And radiant heat needs a clear view to emerge. Otherwise less efficient modes of heat transfer are used.”
Less efficient? The troposphere is dominated by convection; a water driven heat pump with a staggeringly large throughput running on idle most of the time. Diffusion confusion yet again. You cannot use fag packet fizzicz to calculate net radiative transfer in isolation, much less attempt it in only two dimensions assuming magic partial mirrors mounted on some toy story spherical Rubik’s cube.
All forms of energy diffusion are integrated all of the time in all directions. The usual cartoon and confusion of colour temperature with actual energy transfer are sheer stupidity and the very root of this entire nonsense.

“Less efficient?”
Yes. Upward infrared through a transparent atmosphere is efficient enough to emit all absorbed solar radiation at the snowball Earth temperature of 255K. The fact that we are at about 288K shows that all operating mechanisms including convection are far less efficient, and require a much larger driving temperature difference.

John Finn

Nick , Harry et al
You don’t need ink or swimming pools.
Milk is 87% water. If you add a teaspoon of milk to a glass of clear water (about half a litre) you will not be able to see the bottom of the glass. Much of the visible light will have been reflected – yet the reflective constituents (fats and casein) only represent about 0.06% (600 ppm) of the liquid.
Try it.

John C. April 7, 2016 at 10:36 am
Slipstick: Let’s ignore the FACTS presented to you. Let’s also ignore the sun, water vapor, ocean cycles and all the other things we don’t understand. The agreed FACT is only one molecule in 33 is human caused.

That is not a FACT of any sort.

Gloateus Maximus

Nick,
Your ink analogy is faulty.
In the first place, the “pool” with a million balls is already inky from 30,000 H2O molecules. Second, we’re not adding 400 extra CO2 molecules, but only 115. How much inkier will 115 extra molecules out of 30,285 make the pool? To say nothing of the other GHGs at even lower concentrations.
One extra CO2 molecule per 10,000 might have a measurable effect in parts of the atmosphere with low H2O concentrations, but in most places H2O totally swamps the radiative effect of CO2, above the level required for life. Adding more CO2 generally has a negligible effect, as in fact has been observed over the past century or more of rising CO2.
The effect is so slight that Callendar, a proponent of (beneficial) man-made GW during the 1930s, considered his hypothesis falsified by the 1960s, which were more frigid despite much more CO2 than in the ’30s. And he was right.
Under yet more CO2, the ’70s were still cold. Then the PDO flipped and the planet warmed slightly from the late ’70s to late ’90s. Since then, GASTA has stayed flat to declined. The long-awaited super El Nino has finally occurred, producing a probably temporary ever so slight uptrend since the super El Nino of 1997/8, but in all likelihood we’re headed back down in coming decades, thanks to the PDO and AMO oceanic oscillations.

Nick Stokes says:
400 ppmv of ink in a 2.5m deep pool is equivalent to adding a 1mm layer and stirring. And 1 mm of ink is quite opaque.
Fuzzy thinking. You’re just making things up and asserting them as fact. The depth of the pool is not your argument, and it also disregards the area of the pool. What matters is 400 ppm.
When I was a kid my aunt used to add what was called “bluing” to her clothes washer. It was supposed to make ‘whites whiter’.
The bluing was in a bottle, and it was as dense and dark as any India ink. She would pour a few tablespoons into the water, and my cousins and I would watch amazed as the bluing disappeared. It did not visibly change the transparency of the water at all.
Every argument made about the dangers of more CO2 amounts to evidence-free hand waving. Those arguments have remained unchanged for decades. But since hand waving is all you’ve got, that’s what you use.
If you were an honest skeptic, you would have more options.

Gloateus Maximus April 8, 2016 at 9:56 am
Nick,
Your ink analogy is faulty.
In the first place, the “pool” with a million balls is already inky from 30,000 H2O molecules. Second, we’re not adding 400 extra CO2 molecules, but only 115. How much inkier will 115 extra molecules out of 30,285 make the pool? To say nothing of the other GHGs at even lower concentrations.

Wrong, in the 15 micron band H2O does not significantly absorb compared with CO2:
http://i302.photobucket.com/albums/nn107/Sprintstar400/H2OCO2.gif

dbstealey April 8, 2016 at 10:38 am
Nick Stokes says:
400 ppmv of ink in a 2.5m deep pool is equivalent to adding a 1mm layer and stirring. And 1 mm of ink is quite opaque.
Fuzzy thinking. You’re just making things up and asserting them as fact. The depth of the pool has nothing to do with your argument, which also disregards the area of the pool. The only thing that matters is 400 ppm.

Yes, your thinking is indeed fuzzy, stealey. The depth of the pool is indeed critical; look up Beer’s law: absorption is proportional to concentration × path length.
When I was a kid my aunt used to add what was called “bluing” to her clothes washer. It was supposed to make ‘whites whiter’.
The bluing was in a bottle, and it was as dense and dark as any India ink. She would pour a few tablespoons into the water, and my cousins and I would watch amazed as the bluing disappeared. It did not visibly change the transparency of the water at all.

Perhaps you should have asked your aunt how laundry blue works. The dye, say Prussian blue, is adsorbed onto the clothes and therefore removed from solution; what you’ve done is to dye the clothes blue (very slightly). The slight blue color added to the clothes counteracts the dingy yellow color of the old clothes and makes them appear white.

The depth of the pool has nothing to do with your argument, which also disregards the area of the pool.

Nick Stokes is right here. Depth has everything to do with it and area has nothing to do with it.
As far as depth is concerned, you need to take the ratios of the depths and the 400 ppm to million ppm. 400/1 000 000 has the same ratio as 1 mm/2500 mm. So if the pool were 25 m or 25 000 mm deep, you would need 10 times as much ink, or 10 mm of ink.
As for area, that does not matter. If you had a 2.5 m long straw or a 2.5 m pool the size of a city, it would still take enough ink to cover the top with 1 mm.
(On the best science site, we cannot let slip ups go unchallenged. ☺ Agreed?)

Werner,
I’ll agree that the 400 ppm is the relevant metric. Neither depth nor area have anything to do with Nick’s claim of making the water opaque, because that wasn’t his argument.
It doesn’t matter if the pool is 2.5 cm deep, or 2.5 metres, or 2.5 miles deep. Or wide. The 400 ppm (or, as you say, the one molecule in 10,000) is what matters.
Also, from personal observation I don’t accept Nick’s belief that one molecule of ink in 10,000 of water will make the water opaque.
(On the best science site, we cannot let slip ups go unchallenged. Agreed? ☺)

Also, from personal observation I don’t accept Nick’s belief that one molecule of ink in 10,000 of water will make the water opaque.

Actually, Nick said that 4 molecules in 10,000 will make it opaque. Now turning to CO2, what was not directly addressed is the fact that nature already had 2.8 molecules of CO2 per 10,000 in 1750. Does man’s additional 1.2 molecules per 10,000 make a further difference? Most would agree that, due to the logarithmic effect, this further addition by man makes very little difference to temperature.
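The “logarithmic effect” mentioned above can be made concrete with the commonly cited simplified forcing expression ΔF = 5.35 · ln(C/C0) W/m² (Myhre et al. 1998). That formula is an assumption added here for illustration, not something from the thread; a minimal sketch:

```python
import math

# Illustrative only: the simplified CO2 forcing expression
# dF = 5.35 * ln(C / C0) W/m^2 (Myhre et al. 1998) -- an assumption
# introduced here; the comment above invokes the logarithmic effect
# only qualitatively.
C0 = 280.0   # ppm, the ~1750 concentration (2.8 per 10,000)
C = 400.0    # ppm, the present concentration (4 per 10,000)

forcing = 5.35 * math.log(C / C0)
print(round(forcing, 2))          # 1.91 W/m^2 for the full 280 -> 400 rise

# Because of the logarithm, each further increment adds less:
next_increment = 5.35 * math.log((C + 120) / C)
print(round(next_increment, 2))   # 1.4 W/m^2 for the *next* 120 ppm
```

The diminishing increments are the quantitative content of the “logarithmic effect”: equal additions of CO2 produce successively smaller forcings.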

AJB

“Upward infrared through a transparent atmosphere is efficient enough to emit all absorbed solar radiation at the snowball Earth temperature of 255K. The fact that we are at about 288K shows that all operating mechanisms including convection are far less efficient, and require a much larger driving temperature difference.”
Absolute rubbish, we are not talking about an idealised transparent atmosphere and you’re still thinking in terms of isolated transfer mechanisms. Take a look at the temperature gradient through the entire atmosphere. Aggregate diffusion is heavily skewed by convection, which is in large part why earth has an enormous stratospheric temperature inversion (and Venus does not). The tropospheric lapse rate is linear due entirely to convection within reducing density. The assumption from colour temperatures that CO2’s effective radiative altitude for 2-dimensional SB calculation purposes is below the tropopause is not only wrong, it’s ludicrous. It confuses energy flux with temperature. Mixed gases with condensing components do not behave like black bodies or even grey bodies. Point a pyrometer anywhere you like, it cannot tell you anything about net aggregate energy transfer. Doing the physics properly requires integration of all forms of transfer in three dimensions at the micro scale, particularly the latent heat component inherent to cloud evolution. Back of a fag packet shell games in isolation are not even wrong.

A recent study showed an increase in atmospheric CO2 of 22 ppm during the period 2000-2010 produced an increase in radiative forcing of 0.2 W/m^2. That’s close to the predicted value, and evidence that added CO2 produces warming.
http://www.iflscience.com/environment/scientists-find-direct-evidence-atmospheric-co2-heats-earth-s-crust
http://www.nature.com/nature/journal/vaop/ncurrent/full/nature14240.html

A recent study showed an increase in atmospheric CO2 of 22 ppm during the period 2000-2010 produced an increase in radiative forcing of 0.2 W/m^2.

The first link does not work and they want money to see the second. However the dates could not be worse to prove a point! 2000 had a La Nina and 2010 had an El Nino.

Werner, both links work on my machine. The bottom link is to an abstract where you can find the information I provided. Finally, the study shows an increase in radiative forcing and does not measure surface temperature.

“Why don’t we invest in better preparing for what nature throws at us whether it’s hot or cold and stop goofing around trying to control it with some trumped up idea that co2 is poison.”
Such a relevant, intelligent, logical question. And you know what….I cannot think of ONE logical, intelligent, relevant reason except one (but the idea itself is insane mind you)…if you wanted to control the world’s money for any reason…play with it, make buildings out of it, redistribute it, undermine your enemies….economy, people…you’d have to attack at the very root of prosperity. Which for the Western world has been the increasing ability to move freely about, and live in relative comfort, relatively inexpensively. Fossil fuels. Movers and shakers can only move and shake because of them.
Having so many powerful industrial giants sucks away vast amounts of power (money) from ever being focused in one centralized place (one world government). You gotta stop the diffusion….close down access from a lot to a few. But it would be easy to see that such an agenda was mindbogglingly dangerous and that those who believe in it were absolutely bonkers…that is… if they went after powerful and necessary business men and women and investors outright. So what would be a very clever way of bringing them down without looking like bat crap crazy, jealous, power hungry hyenas? Attack using a “scientific hypothesis” that all those evil, nasty emissions can only be attributed to one source—fossil fuels….break down their profit margins with taxes, ruin their reputations with accusations and make the whole world believe that they are KILLING THE PLANET, and they couldn’t give a rat’s behind about it either.
Invite all the little, pathetic, powerless people of the world to become superheroes for Earth….first, tell them how marginalized they really are….teach them that there’s 97% of them and only 3% of the big, bad enemy….stoke class warfare…and movements like Occupy Everything….pretend you are supporting their progress while squeezing them in subtle ways to make their suffering even more acute. You gotta keep them down so you can point out how down they are all the time! Make them angry. And scared. And feed them propaganda 24/7 from every angle. And hope with all you have in you that the climate doesn’t do what it most likely and naturally will….reach a certain point and start to cool off……BEFORE your plan is successful. After all, it would look stupid if the world starts to cool off like it always has and you haven’t implemented any of your “world saving strategies” before it does! In fact….the closer it gets to that actually happening…the more shrill and panicked and terrifying you might have to become in order to push it all past the tipping point.
And the best part is….between the advances of technology and the age old fact that some people are so stupid, so gullible, soooooo incredibly weak in the face of even basic suggestions… you wouldn’t even have to form an old fashioned physical conspiracy! No meetings….no secret handshakes….no overhead..no heads on spikes. Nothing. Just hit the public over and over and over again in their emotional soft spots….home….family….hopes….dreams….their religion….their futures….their darkest fears…..death…..destruction….insecurity….loss.
You don’t need to crest a hill with an overwhelmingly large army arrayed in shining battle gear anymore to bring your foes to their knees! Silly! Just make them feel like something that bad and that awful is coming for them if they don’t do something.
(Hint…if you spend money on preparations and adaptations for future natural disasters…they’ll realize doing so is much easier, productive, and visibly reassuring. It would give people hope and comfort and peace….and you can’t do a d@mned thing to push an agenda on hopeful, comfortable, peaceful people!)

Yes thanks

A C Osborn

It will be interesting to see how the satellites and NOAA/GISS deal with the sudden drop in sea temperatures.
Will they ignore it, adjust it out, or what?
See
http://notrickszone.com/2016/04/06/global-sea-surface-temperatures-have-fallen-sharply-cooled-surprisingly-negative-global-temperature-anomaly-by-end-of-2016/#sthash.vdno3ipL.dpbs

Joel Snider

Or ignore it until they can adjust it, or simply claim that it all proves their point anyway.

Janice Moore

Re: “effect”
The phrase “CO2 has an effect” describes something very small (if indeed it exists at all), but it has great potential to mislead. This hyper-technical phrase evokes in the average reader’s mind the false implication that CO2 has a controlling effect. In tort law, when a potential, very small cause is OVERWHELMED by another, controlling cause, the controlling cause is called a supervening cause. Here, the effect of natural drivers is the supervening cause of climate change.
To mention human CO2 emissions is unhelpful at best, damaging to the truth about causation at worst.
If an effect is obliterated by another cause, here natural climate drivers, it is accurate and wise (we are in a WAR for science realism where word-twisting by the likes of St0kes can easily fool the uninformed) to leave the conjecture about human CO2 emissions’ potential “effect” completely aside.
************************************************************
Remember Major Burns on M.A.S.H.? If he were made a general, he would lose the war, getting bogged down in bickering over hyper-technical nit-picking: winning wars takes strategy as well as technical expertise. Wisdom must guide knowledge.
*****************************************************
The STOP in warming, so far as meaningful measurement goes, IS. It has no “end” at this point.
*****************************************
ANOTHER GREAT ARTICLE, MR. BROZEK — THANK YOU!

ANOTHER GREAT ARTICLE, MR. BROZEK — THANK YOU!

You are welcome! But do not ignore the giants who directly or indirectly contributed, namely David Hoffer and Nick Stokes.

TonyG

Maybe I’ve missed some posts but the last I read the ‘pause’ was somewhere around 18 years. What caused the sudden jump to 23? What did I miss?

Janice Moore

Looks like you missed THIS post, Mr. G.:

In the above graphic, the green line is the slope since May, 1993 …

2016 - 1993 = 23

Maybe I’ve missed some posts but the last I read the ‘pause’ was somewhere around 18 years. What caused the sudden jump to 23? What did I miss?

We are talking about two different things. See an earlier post of mine that clearly describes the differences here:
http://wattsupwiththat.com/2014/12/02/on-the-difference-between-lord-moncktons-18-years-for-rss-and-dr-mckitricks-26-years-now-includes-october-data/
In short, the 18 years had a slight negative slope. The present 23 years has a positive slope, but it is not statistically significant enough for climate scientists to be sure we really do have warming over the 23 years.
As an analogy, suppose we have a political poll that says one candidate has 40% and the other has 38%. But then they say the margin of error is 3% 19 times out of 20. So with the margin of error considered, we really cannot be sure the one candidate is favored by a majority.
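The poll analogy maps directly onto the RSS numbers quoted at the top of the post. A minimal sketch (assuming the 95% CI is approximately rate ± 1.96 standard errors, a normal approximation close to, but not exactly, what Nick Stokes’ trend viewer computes):

```python
# Sketch: cross-check the RSS figures quoted at the top of the post
# (rate 0.871 C/century, CI -0.022 to 1.764, t-statistic 1.912) using
# the normal approximation CI = rate +/- 1.96 standard errors.
rate, upper = 0.871, 1.764   # deg C / century, from the post
se = (upper - rate) / 1.96   # implied standard error of the trend
t = rate / se
print(round(t, 2))           # 1.91, close to the quoted t-statistic 1.912

lower = rate - 1.96 * se
print(round(lower, 3))       # -0.022: the interval just includes zero,
                             # hence "no statistically significant warming"
```

This is exactly the poll situation: a positive central estimate (0.871 °C/century) whose margin of error reaches down past zero.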

TonyG

Ok that makes sense now. I knew there was something fairly obvious I was missing. Thanks for clarifying.

“it is not statistically significant enough for climate scientists to be sure”
But the 23-year trend observed was actually 0.92°C/century. And while with different weather it just might have been as low as zero, it might equally have been as high as 1.84°C/century. This does not really have the attributes of a pause.

This does not really have the attributes of a pause.

True, but Phil Jones and the rest of the climate science community, rightly or wrongly, use that yardstick for certain conclusions. Is that not correct?

“use that yardstick for certain conclusions”
People use the existence of statistical significance (SS, at 95%) as a yardstick. They don’t use the non-existence of SS. Lack of SS just means you don’t have enough data to be sure.
The trend of RSS since Aug 2010 had a lower CI of -0.852. Not SS (relative to 0). But the trend itself was 5.29°C/century. That isn’t a pause. Actually, there is an interesting paradox here. To January, from 8/2010, the lower CI was -0.751 – higher than to March. But we’ve had two very hot months, and we’re less certain of a positive trend? The reason is that the sudden rise increased the estimate of variability more than it raised the trend.

The reason is that the sudden rise increased the estimate of variability more than the rise in trend.

Thank you! That explains a puzzle I had with respect to Hadcrut4. I wrote:
For Hadcrut4.4: Since October 2001: CI from -0.016 to 1.812 (goes to January)
But with a high February anomaly, it got extended back to August as follows.
Temperature Anomaly trend
Aug 2001 to Feb 2016 
Rate: 1.007°C/Century;
CI from -0.003 to 2.018;
t-statistic 1.954;
Temp range 0.443°C to 0.589°C

TonyL

The Pause numbers for 18 years (or 18 years, X months) are based on the definition of a regression line with a slope less than 0.0.
The 23-year numbers use the definition of a regression line with a slope not statistically significantly different from 0.0.
The issue of statistical significance is very important, as it takes into account, in some fashion, the natural variability of the data.

The error bars on RSS are actually worse than that.
They are so big that it’s really pointless to compare satellite temperatures with land temperatures, or to compare satellite temperatures with GCMs.
As to the pause..
when can we see the goofy approach of starting with today and going back in time to measure a pause?

when can we see the goofy approach of starting with today and going back in time to measure a pause?

That could take quite a while. See Nick Stokes’ excellent comment here on this point:
http://wattsupwiththat.com/2016/04/04/march-was-3rd-warmest-month-in-satellite-record/#comment-2182271

J Martin

When the La Nina takes effect of course. Then you will be able to read on wuwt that the pause has lengthened.

John C.

Slipstick, you just gave the propaganda response: make a statement without any facts. You didn’t explain how this one human-caused molecule in 85,000 is going to have the effect you believe. And what percent reduction do you think will stop what you believe? You might need to figure out an economical method to remove as much CO2 as possible to save the planet. Of course, this will needlessly kill off many plants and cause many human and animal deaths. But you may be OK with that.

Slipstick

John C,
Before I can respond, to what does 85000 refer?

John C.

Molecules of what we call air in a specific volume. 400 parts per 1,000,000 is a FACT we all agree on. Do the math, with some rounding of course. Of the approximately 33 molecules in 85,000 parts, only one is attributed to humans. There is some difference in mass, but not enough to really make a difference in basic physics. The re-radiation of absorbed heat is well known. I am only concerned with quantity because I’m applying critical thinking. I can’t see how what is claimed is possible. If CO2 were at a much higher level, and I mean much higher, you might have a point. The fact that so little CO2 is human-caused, and that our ability to reduce it while maintaining a livable planet is so limited, defies logic without some very big changes.

John C.,
You’re misunderstanding two things. First, a single CO2 molecule can interact repeatedly with photons, redistributing their energy without the CO2 ever being used up. Second, concentration is only part of the equation; the other part is the sheer scale of the atmospheric air column. Here’s an analogy I wrote a long time ago to try and illustrate:
Assume a square glass jar that is 10 grains on a side. That’s 100 grains in a single layer. Now imagine the jar is 100 grains tall. 10,000 grains in all. Make 99,996 white and just 4 red, same ratio as your example above. Suppose the jar is about 10 cm tall. Now, instantly make all the white grains invisible. What would you see?
Well, you’d see a 10 cm tall jar that is mostly empty, with a fleck of red here and there. You could easily draw a vertical line from the bottom of the jar to the top without hitting any of those red flecks. In fact, you could draw a lot of them.
Now, stack thousands of those jars on top of each other in a tower 14 kilometers high. You’ll need a stack of 140 thousand jars. Now try drawing a line from bottom to top without hitting a red grain. You can’t. In fact, not only that, you can’t even do it without hitting thousands of red grains.
I’m a confirmed skeptic, but radiative physics is a bit more complex than simply drawing conclusions from concentration ratios.
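The stack-of-jars claim can be checked with back-of-envelope probability. A minimal sketch (my own arithmetic; the grain counts and jar dimensions are from the comment above):

```python
import math

# Sketch of the arithmetic behind the jar analogy above. A vertical line
# through one jar crosses 100 grain positions; 4 grains in every 10,000
# are red, and the 14 km stack holds 140,000 jars.
positions = 100 * 140_000                    # 14,000,000 grain positions
expected_red_hits = positions * 4 / 10_000   # 4 red per 10,000 grains
print(expected_red_hits)                     # 5600.0 -> "thousands of red grains"

# Probability that the line misses every red grain is effectively zero:
p_miss = math.exp(positions * math.log(1 - 4 / 10_000))
print(p_miss)                                # 0.0 (underflows; ~e^-5601)
```

So even at 4 parts in 10,000, the column is deep enough that a photon-path analogue cannot avoid the “red grains,” which is the point of the analogy.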

Harry Passfield

John C: I hadn’t refreshed when I posted below. I figured it out …3%? Guess I was close….

Slipstick

Only one is attributed to humans? Using your scale, the current concentration is 34/85000 and since ~1960 the concentration has risen from ~26 to 34. Are you attributing that increase to something other than human activity? If so, what?

Harry Passfield

Perhaps it’s the ratio of CO2 that is man-made? (As opposed to natural CO2 in the atmosphere) Just guessing here.

Perhaps it’s the ratio of CO2 that is man-made? (As opposed to natural CO2 in the atmosphere) Just guessing here.

There have been many posts on this and I am certainly not going to get into it in this post. As well, there is strong disagreement. Here is my understanding:
Out of every 100 CO2 molecules that enter the atmosphere each year, 3 are due to man and 97 to natural sources. However, these 3 due to man have added up over the last 250 years, so that of the present 400 parts per million, 120 parts per million are the cumulative total due to man. So we have caused a roughly 40% increase in CO2. But so what? The important thing is that this has not contributed to CATASTROPHIC warming, nor will it in the future.
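The bookkeeping above can be sketched numerically (rounded figures from the comment; the 280 ppm baseline is implied by 400 minus 120):

```python
# Sketch of the cumulative-CO2 bookkeeping from the comment above.
human_cumulative_ppm = 120          # ppm attributed to man since ~1750
preindustrial_ppm = 400 - 120       # 280 ppm baseline

increase = human_cumulative_ppm / preindustrial_ppm
print(round(increase * 100))        # 43 -> the "roughly 40% increase"

# Expressed per 10,000 molecules, matching the earlier ink discussion:
per_10k_human = human_cumulative_ppm / 100
print(per_10k_human)                # 1.2 molecules per 10,000
```

Note the 40% figure is rounded down slightly; 120/280 is closer to 43%.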

Harry Passfield

davidmhoffer: Thank you for the analogy with the glass jars (14km high).
I’m a confirmed skeptic, too (does it show?), but our warmist politicians – who make the carbon laws – are themselves “drawing conclusions from concentration ratios”.
I guess the thing we need to take notice of is that little bit in “ppmbv” (I’m assuming your glass jars are the “bv” bit).

Javier

The world has been warming for 400 years, almost all of it due to natural variability [causes?]. It will continue to warm (I expect) and most of the warming will be due to natural variability

I take issue with this phrase only.
The truth is we are not sure what caused the LIA, and therefore we don’t know what is causing the post-LIA warming. The most popular theory is that the LIA was caused by low solar activity, helped by unusual volcanic activity, especially at the beginning (13th century) and at the end (1815).
We do know that on a millennial scale the planet is cooling due to lower obliquity (axial tilt) and low summer insolation in the Northern Hemisphere due to unfavorable precession.
The most reasonable explanation is that LIA was an anomalous cold period caused by unusual conditions and the planet has naturally and gradually warmed to the level that corresponds to its present orbital configuration and perhaps a bit more due to a rebound effect and helped by the increase in GHGs.
The most reasonable expectation is that after this warming period the world should resume its progressive cooling. The more we warm, the stronger the opposing cooling forces get. Paleoclimatology shows that GHGs are not a strong driver of temperatures by themselves, since the second half of the Holocene showed progressive cooling despite increasing GHG concentrations.
Most people assume that in the absence of warming forcings global average temperatures should remain more or less level, and that with indefinitely increasing levels of CO2 in the atmosphere temperatures should increase indefinitely. Both assumptions are wrong. Global temperatures work like a roller coaster: once the highest point of the present interglacial was reached, the only way is down, even if the fall is made of ups and downs. As high levels of CO2 did not prevent the planet from entering glacial conditions previously, and did not prevent the cooling of the 1945-1975 period, we should not expect them to prevent a global cooling in the near future.
In conclusion you should not expect that it will continue to warm. It might continue to warm for some more time or not, but a peak warmth should be reached at some point and then global cooling should resume. Let’s hope that peak warmth was not 2015. I hope we continue getting more record warm years in the future, because it beats the alternative.

Gloateus Maximus

Earth has been in a long-term cooling trend for 3000 years, ie since the end of the Minoan Warm Period. The East Antarctic Ice Sheet quit retreating at that time, for instance.
Peak warmth of the Minoan WP was a little less than the Holocene Optimum, but lasted less time. The peaks of the Roman, Medieval and so far Modern WPs have each been lower than for the preceding WP. The trend is down and not our friend.
We may however be in another of the super interglacials which, based upon the orbital eccentricity cycle, occur at roughly 400K year intervals, in which case “catastrophic” warming could occur naturally over the next 20,000 years or so, ie partial melting of the Southern Dome of the Greenland Ice Sheet.

Javier

Earth has been in a long-term cooling trend for 5200 years, when the Neoglacial subperiod of the Holocene started. See Thompson, Lonnie G., et al. “Abrupt tropical climate change: Past and present.” Proceedings of the National Academy of Sciences 103.28 (2006): 10536-10543.
There are no super interglacials occurring at roughly 400K-year intervals. MIS 19 took place 800K years ago and lasted about 12,000 years, slightly longer than the Holocene so far. The astronomical signature of MIS 19 is almost identical to that of the Holocene: the closest match of all interglacials in the past million years.

Gloateus Maximus

Yes, you could argue that the cooling trend started at the end of the HCO c. 5 Ka, but the Minoan was as warm, briefly, as the HCO.
Dunno what data you rely upon, but the fact is that the Southern Dome of the GIS melted completely or almost so during the interglacials of c. 800 and 400 Ka. During the warmer and longer than now Eemian, it partially melted.
I can’t link to studies on the length of MISes 19 and 11, since they’re paywalled, but both interglacials were warmer and longer than the Eemian, up to 30,000 years in duration (or more, depending upon how you count).

Javier

the fact is that the Southern Dome of the GIS melted completely or almost so during the interglacials of c. 800 and 400 Ka. During the warmer and longer than now Eemian, it partially melted.

I don’t know what you are talking about. Antarctica has been frozen for millions of years. The Antarctic ice cores do not extend further into the past because the bottom melts away due to geothermal heating, or the bottom layers get disturbed by horizontal shearing. They are now looking for places that could hold older ice, up to 1.5 million years, because they accumulate less ice, not more.
https://www.sciencedaily.com/releases/2013/11/131105081228.htm
MIS 11 is longer than MIS 1 (the Holocene), probably because its precession peak (Northern summer insolation) and obliquity peak are separated by a little less than 10,000 years, so the rise in insolation from precession compensates for the fall in insolation from obliquity. But in MIS 1 the peaks coincide, so the next precession peak is almost 20,000 years away, with plenty of time for obliquity to fall almost to the bottom of its cycle without significant northern summer insolation from precession.
MIS 19 800k years ago was not longer than MIS 1. Nor was it warmer. Probably about the same or slightly cooler judging from deuterium levels.
The following figure is from Pol, K., et al. “New MIS 19 EPICA Dome C high resolution deuterium data: Hints for a problematic preservation of climate variability at sub-millennial scale in the “oldest ice”.” Earth and Planetary Science Letters 298 (2010): 95-103.
http://i1039.photobucket.com/albums/a475/Knownuthing/Figure%209_zpsl52xhrtm.png
MIS 1 is in red and MIS 19 in black. From the highest deuterium level to the end of the plateau phase, MIS 19 lasted about 11,000 years, and MIS 1 has already extended to 10,500 years. The accelerated cooling starts about halfway down the obliquity cycle despite rising summer insolation from precession.
There is no basis for saying that the lows in the 400k-year eccentricity cycle produce longer interglacials; quite the contrary: the lower the eccentricity, the lower the northern insolation from precession and the lower the forcing to warm during precession peaks. If you check the 65°N summer insolation curves you can quickly see that the highest values are reached during eccentricity highs like the one that took place at MIS 15 200k years ago.
MIS 11 and MIS 1 are really very different astronomically, so we should not expect them to behave similarly in terms of temperature or duration. The following figure shows that they can be aligned by precession or by obliquity, but not by both, since the peaks are displaced. MIS 1 is in red and MIS 11 in black.
http://i1039.photobucket.com/albums/a475/Knownuthing/MIS11Tzedakis_zps4fubj4yy.png

Gloateus Maximus

GIS means Greenland Ice Sheet. I should have spelled it out.
Here is an old link on the melting of the Southern Dome of the GIS:
http://www.livescience.com/7331-ancient-greenland-green.html
Data from Antarctica might differ, but it now appears that the Southern Dome melted twice, once during MIS 19 and again during MIS 11. As I mentioned, it partially melted during MIS 5, ie the Eemian Interglacial.

Javier

I think you got it wrong because you didn’t actually read the paper, Gloateus. The paper you are referring to is this one:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2694912/
And about the dating of the material found in the ice cores they say:
All four dating methods suggest that the Dye 3 silty ice and its forest community predate the Last Interglacial (LIG, ~130-116Ka) (Fig 2), which contrasts with the results of recent models suggesting that Dye 3 was ice-free during this period (27, 28). Indeed, all four dating methods give overlapping dates for the silty ice between 450Ka and 800Ka (Fig. 2), exceeding the current record of long-term DNA survival from Siberian permafrost of 300-400Ka (9). However, due to the many assumptions and uncertainties connected with the interpretation of the age estimates (7), we cannot rule out the possibility of a LIG age for the Dye 3 basal ice.
In plain words, they think it is older than 450k years and younger than 800k years. That rules out MIS 11, which took place 425k years ago, so it should be MIS 13, MIS 15, MIS 17 or MIS 19. However, not all is lost: since the dating methods are so uncertain, the biological material could actually be from the Eemian, so any interglacial is a candidate.
There is simply no support for any theory about periodic super interglacials. The Holocene is just like any interglacial, about to end in one or two millennia at most.

Gloateus Maximus

Javier,
I read that paper and subsequent ones which found material from 800 Ka.
The Holocene might well last another 50,000 years. Or not. But the fact is that super interglacials have happened and could again.
Even if the organic material from Greenland is only 400,000 years old, it shows that the Southern Dome melted then, in a very long interglacial.

J Martin

Why can’t we quantify how much global warming is due to step inputs from El Nino? Presumably each La Nina doesn’t fully undo each El Nino. Any remaining trend may then be attributed to third-world CO2, since northern hemisphere CO2 consumption by farm crops exceeds northern hemisphere production of CO2.

Of course each La Nina should “fully undo” each El Nino. IN THE LONG TERM.
Otherwise GMT would be set in stone on a rising trend from millennia ago.
ENSO redistributes heat in the climate system (~93% of which resides in the oceans) into the atmosphere. It all comes from the Sun ultimately.
Without an internal source of heat from the ocean bed then PDO/ENSO should cancel.
That it no longer does is due to the GHE of CO2 – up 40% due to anthro emissions.

Gloateus Maximus

CO2 is a tiny portion (400 ppm) of total GHG, although a very distant second to H2O (perhaps 30,000 ppm on a global average basis).
Over the past 150 years, the naturally warming Earth has benefited from having about one more CO2 molecule per 10,000 dry air molecules, ie up from around three to four. Two more such molecules would be even better for plants and other living things.

J Martin

Toneb. CO2 produces downwelling infrared which only penetrates one micrometre, so it cannot warm the ocean, nor concentrate its force in such a localised part of the ocean. There is good satellite imagery which points at an ocean-bed contribution. Also there is a 60-year cycle, which would seem to rule out CO2. There may be proxy evidence for El Nino going back centuries, further reducing the role of CO2 in El Nino. If CO2 plays a role in El Nino, then what role does it play in La Nina?

1sky1

The amateurish practice of fitting linear regression lines to woefully short stretches of record and then computing “confidence intervals” based upon unverified models (e.g. “red noise”) of global temperature variability yields highly arbitrary estimates of a physically meaningless “trend.” If an unequivocal indication of actual low-frequency variability is desired, a well-designed low-pass filter with a cutoff near one cycle per decade has to be employed. The results of such filtering are exact.

“a well-designed low-pass filter with a cutoff near one cycle per decade has to be employed. The results of such filtering are exact.”
Linear regression is a filter. It is a Welch filter applied to the differences. Or a differencing of the Welch-filtered data.
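Nick’s equivalence can be verified numerically: for equally spaced data, the OLS slope is algebraically identical to a sum of the first differences weighted by a parabolic (Welch) window. A minimal sketch on synthetic data (the series here is illustrative, not any actual temperature record):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
t = np.arange(n, dtype=float)
y = 0.01 * t + rng.normal(0, 0.2, n)   # synthetic series: trend plus noise

# Ordinary least-squares slope
slope_ols = np.polyfit(t, y, 1)[0]

# Identical slope as a weighted sum of first differences.
# The weights W_k = sum_{i>k} (t_i - tbar) / sum (t_i - tbar)^2
# trace out a parabola (Welch window shape).
c = (t - t.mean()) / np.sum((t - t.mean()) ** 2)
W = np.cumsum(c[::-1])[::-1][1:]       # tail sums of the regression weights
slope_welch = np.sum(W * np.diff(y))

print(slope_ols, slope_welch)          # agree to rounding error
```

The identity follows from summation by parts, using the fact that the regression weights sum to zero.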

1sky1

The slope of linear regression, which is the metric used in this article, is a very crude band-pass filter and does NOT completely display the low-frequency content of the data series, as would a well-designed low-pass filter.

Sparky

J Martin, the oceans could also presumably be assumed to have a layer of water vapour hanging over them, which would swamp any effect CO2 is supposed to have.

Apparently Brozek has used a test with low statistical power. The standard regression analysis shows statistical significance at the 99.9999% confidence level (p=1.21e-7). The slope of the regression line is 0.0881°C/decade, with a 95% confidence interval of 0.056-0.120°C/decade. Data source for UAH version 6 beta 5: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta5.txt
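For readers who want to reproduce this style of calculation, here is a minimal sketch of an OLS trend with a naive white-noise 95% confidence interval, run on synthetic numbers rather than the actual UAH file; as the exchange below this comment shows, monthly anomalies are autocorrelated, so a naive interval of this kind is too narrow.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 444                                  # e.g. 37 years of monthly data
t = np.arange(n) / 120.0                 # time in decades
y = 0.09 * t + rng.normal(0, 0.15, n)    # synthetic anomalies, deg C

# OLS slope and its standard error under the white-noise assumption
tm, ym = t - t.mean(), y - y.mean()
slope = np.sum(tm * ym) / np.sum(tm ** 2)
resid = ym - slope * tm
se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(tm ** 2))
lo, hi = slope - 1.96 * se, slope + 1.96 * se   # approximate 95% CI
print(f"trend = {slope:.3f} [{lo:.3f}, {hi:.3f}] deg C/decade")
```

With serially correlated residuals the same code understates the width of the interval, which is the point of the autocorrelation discussion that follows.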

1sky1

The statistical significance of “standard regression analysis” is predicated upon entirely independent trials of linear relationship (i.e., “white noise” plus trend), instead of serially autocorrelated data, such as found in a geophysical setting.

Brozek’s conclusions do not take into account serial autocorrelation. If that is an issue, the evidence for it needs to be presented here. A previous study demonstrates that the major global temperature trends are positive even when controlled for serial autocorrelation.

jpaullanier,
In other words, satellite measurements are very accurate. That contradicts the alarmist talking point that satellite data is NFG.

“Brozek’s conclusions do not take into account serial autocorrelation.”
They certainly do. AR(1). There is a discussion here. Autocorrelation has a small effect on the trend, but greatly increases the uncertainty.

Nick, I performed a Durbin-Watson test for autocorrelation on the regression I mentioned. The Durbin-Watson statistic is 2.186. At the 1% level of significance, UL=1.637. So there is no reason to suspect autocorrelation. This is consistent with what was found earlier for major global temperature trends.
“Global temperature series have positive trends that are statistically significant even when controlling for the possibility of strong serial correlation.”
http://journals.ametsoc.org/doi/full/10.1175/1520-0442%282002%29015%3C0117%3ATAOSRT%3E2.0.CO%3B2

JPL,
“So there is no reason to suspect autocorrelation. This is consistent with what was found earlier for major global temperature trends.”
The paper you cite is using annual data. Then there is not much autocorrelation. But if you use monthly data, there is much more, and you must allow for it. Here is a post at Climate Audit where Hu McCulloch correctly criticises Steig et al for not allowing for autocorrelation with Antarctic monthly data. Steig published a corrigendum.
Of course, the extra uncertainty of monthly data with autocorrelation is compensated by the greater number of data points.

“In other words, satellite measurements are very accurate. That contradicts the alarmist talking point that satellite data is NFG.”
No, they are not: no more accurate than a GCM for temperature, as they employ complex algorithms to extract the temperature, with parameterisations included.
Versions 1 to 4 for RSS and versions 1 to 6 for UAH show that.
And your “darling” dataset (or is it, since v4.0?): RSS’s chief Carl Mears says….
“A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!)”

Toneb, supreme expert at conflating apples with oranges, says that satellite data is…
…no more accurate than a GCM for temp…
So now a computer model output is considered “data”, equivalent to satellite data?
Admit it, you’re just winging it; a know-nothing using the alarmist tactic of ‘Say Anything’.

1sky1

jpaullanier:
Durbin-Watson is a notoriously weak metric for detecting serial autocorrelation, which doesn’t conform to the AR(1) model used by Brozek in accounting for it. And the corresponding power density shows a very strong spectral peak at multi-decadal periods, a feature totally at odds with your presumed uncorrelated white noise.

dbstealey April 8, 2016 at 10:51 am
Toneb, supreme expert at conflating apples with oranges, says that satellite data is…
…no more accurate than a GCM for temp…
So now a computer model output is considered “data”, equivalent to satellite data?

The satellite data is fine; it’s the angular distribution of a certain frequency range of microwave radiation due to emission by O2 (although some other sources, such as ice, have to be eliminated). This radiation has to be modeled to convert it into a temperature, and at this time there have been difficulties in doing this accurately (hence the multiple versions of the software). In fact, the difficulty in dealing with the radiation from near the surface has proved too great, and the TLT product appears to be in the process of being abandoned.

1sky1:
I’m confused about where AR(1) is used. I don’t see where Brozek says he uses it, and I don’t see in the Temperature Trend Viewer site where it is used, either. Maybe I have missed something. It seems to me that Brozek needs to address this in his article. I am also wondering, if AR(1) is employed by Nick, does this allow a correction for the p-value for correlation?

1sky1

jpaullanier:
Although Brozek doesn’t mention it explicitly, AR(1) variability is the (unwarranted) default assumption in “climate science.” From his comment, I suspect Nick Stokes resorts to it.

“I don’t see in the Temperature Trend Viewer site where it is used”
It’s used in the calculation of CIs (which Werner quotes), in the significance shading, and in the plot of t-values. The original post is here. There is discussion here, here, and here.

Nick, I provided the Durbin-Watson statistic for the monthly series I used. It provides no reason to suspect autocorrelation. If there is any further calculation that needs to be performed, that must be documented in a study published in a peer reviewed scientific journal so that anyone can check it.

“Nick, I provided the Durbin-Watson statistic for the monthly series I used.”
Well, I don’t think that is right. The statistic should be about 2*(1-r), where r is the sample autocorrelation, which is positive. And r for monthly residuals is typically about 0.6 or so. See my acfs here. As for published literature, there is plenty. Here is Santer et al, where they use a Quenouille method even for seasonal data (see Table 3). Here is Foster and Rahmstorf, where they contend that even AR(1) isn’t enough.
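The Quenouille-style adjustment mentioned here can be made concrete with an effective-sample-size calculation. Below is a minimal sketch on synthetic AR(1) data; the 0.6 lag-1 coefficient, noise level, and 0.09 C/decade trend are illustrative assumptions, not fitted to any real series.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi = 444, 0.6                 # months of data, assumed lag-1 autocorrelation
e = np.zeros(n)
for i in range(1, n):             # build synthetic AR(1) residual noise
    e[i] = phi * e[i - 1] + rng.normal(0, 0.1)
t = np.arange(n) / 120.0          # time in decades
y = 0.09 * t + e                  # trend plus autocorrelated noise

tm = t - t.mean()
slope = np.sum(tm * (y - y.mean())) / np.sum(tm ** 2)
resid = (y - y.mean()) - slope * tm

# lag-1 autocorrelation of the regression residuals
r = np.sum(resid[1:] * resid[:-1]) / np.sum(resid ** 2)

# white-noise standard error vs. effective-sample-size adjustment
se_naive = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(tm ** 2))
n_eff = n * (1 - r) / (1 + r)     # Quenouille effective sample size
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))
print(r, se_naive, se_adj)        # adjusted error is roughly double here
```

With r near 0.6 the adjustment roughly doubles the trend uncertainty, which is why a monthly series needs a longer record than annual data to reach significance even though it has twelve times as many points.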

Nick, I used the standard method for calculating the Durbin-Watson statistic. You may check it yourself. We’ll just have to leave it at that.

By the way, thanks for your comments and the sources you provided, Nick. From your acfs it does look like I have made a mistake, although I don’t see it! So thanks for taking the time to point that out. I need to learn more about autocorrelation anyway, since it applies to temperature anomalies. I’ll be studying that.

Now I see my mistake. I switched Sum of Squared Differences of Residuals and Sum of Squared Residuals. The correct value for the Durbin-Watson statistic from this method is 0.464. Using ACF, the value is 0.514. Both are less than dL, so autocorrelation is significant at both the 0.05 and 0.01 significance levels. My apologies, and thanks again, Nick.
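For anyone retracing these numbers, the Durbin-Watson statistic and the 2*(1-r) approximation quoted above take only a few lines. A sketch on a synthetic AR(1) series rather than regression residuals (the 0.75 coefficient is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi = 300, 0.75
e = np.zeros(n)
for i in range(1, n):             # synthetic AR(1) series
    e[i] = phi * e[i - 1] + rng.normal(0, 0.1)

# Durbin-Watson: sum of squared differences over sum of squares
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# lag-1 autocorrelation; DW is approximately 2 * (1 - r)
r = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)
print(dw, 2 * (1 - r))            # close to each other, well below 2
```

Positive autocorrelation pushes DW well below 2, matching the corrected value of about 0.46 reported above for the monthly series.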

John C.

Davidmhoffer: Actually, I do understand. All the analogies we all can come up with still can’t explain how only 3% of the CO2 we are responsible for can be so catastrophic to climate. There is only so much energy this little molecule can absorb and radiate. When the sun doesn’t shine, it cools very rapidly. Unless there is a lot of water vapor… curious. Using my critical thinking, I would be much more worried about the 97% I have no control over. This doesn’t address the fact that we will not be able, in any practical way or in any reasonable time, to eliminate more than a small portion of what we are responsible for. This ignores the fact that CO2 has been much higher in the past, and yet here we are arguing about tiny amounts. It seems to me that there are likely other very important drivers of climate, such as the sun, the oceans, and possibly cosmic rays affecting cloud cover. The fixation on CO2 seems a waste of time, and very political.

John C. April 7, 2016 at 4:42 pm
Davidmhoffer: Actually I do understand. All the analogies we all can come up with still can’t explain how only 3% of the co2 we are responsible for can be so catastrophic to climate.

As was explained upthread, we’re responsible for a 40% increase over background levels to date. 3% year over year adds up.
When the sun doesn’t shine, it cools very rapidly. Unless there is a lot of water vapor
And it would cool even more if there were no CO2. Plus, you have to keep in mind that water vapour concentration declines with temperature, which in turn declines with altitude. So even at the equator, over the ocean, where water vapour sits at 40,000 ppm, once you get to a certain altitude water vapour drops off to nearly zero. CO2, on the other hand, remains relatively constant up to the top of the troposphere. So the total effect of CO2 is outsized relative to its concentration compared with water vapour.
The balance of your argument I would agree with. But start with the proper physics, so that the balance of your argument has more credibility.

John C

Can you prove humans caused all the increase? Based on your data, once CO2 is in the atmosphere it never leaves. How did it come down from the much higher levels in the past? Is it hiding with the warming, along with the 20-or-so-year pause in temperature? Which, by the way, doesn’t support the additive theory you want me to accept. I see ocean cycles along with sun cycles as much more likely drivers of climate. We shall soon see as cycle 24 ramps down. By the way, cycle 24 has been very weak compared to 22 and 23. We may all wish there were much more CO2 in the air if cycle 25 is also weak or even less active. All this focus on CO2 is just a smoke screen. I can’t say the purpose, but something isn’t logical.

Based on your data, once co2 is in the atmosphere it never leaves.
That is absolute nonsense. I never said any such thing. But I see you’re uninterested in learning anything.

Can you prove humans caused all the increase?

You may wish to read:
http://wattsupwiththat.com/2010/09/24/engelbeen-on-why-he-thinks-the-co2-increase-is-man-made-part-4/

John C.

Davidm: On my interest in learning… I have read about all the beliefs you have in the boogeyman CO2. I just come to different conclusions. And reading the posts all the way down from here, it seems I’m not alone. So unless you have information to better support your position, don’t even try to infer that I lack interest in learning. I just choose to apply this knowledge with logic and observation. Like I said, I have been a business owner for forty years. I grew up during the sky-is-falling imminent ice age. Didn’t buy that one either. You don’t address the sun, ocean cycles, cloud cover and cosmic radiation. You seem to be fixated on CO2 even though its effect as a driver of climate diminishes above 400 ppm. My main point is that the amount humans are adding is small. Very small. A total ban on CO2 emissions across the globe would have little impact on temperature. Just look back to when CO2 was 300 ppm, and unless you subscribe to ALGORE fake movies and overly adjusted data, there is no there there. But I digress.

John C. April 8, 2016 at 9:15 am
I have read about all the beliefs you have in the boogieman co2.

You’ve no idea what my beliefs are. Your statement is insulting.
You’ve drawn a lot of conclusions that I agree with, but they are based on a very poor understanding of the facts. When I point out things that you could learn in order to make your own articulation of the issue stronger, you change the subject and yammer on about different issues.

David, speaking of proper physics, davidmhoffer said:

Assume a square glass jar that is 10 grains on a side. That’s 100 grains in a single layer. Now imagine the jar is 100 grains tall. 10,000 grains in all. Make 9,996 white and just 4 red, the same ratio as your example above. Suppose the jar is about 10 cm tall. Now, instantly make all the white grains invisible. What would you see?
Well, you’d see a 10 cm tall jar that is mostly empty, with a fleck of red here and there. You could easily draw a vertical line from the bottom of the jar to the top without hitting any of those red flecks. In fact, you could draw a lot of them.
Now, stack thousands of those jars on top of each other in a tower 14 kilometers high. You’ll need a stack of 140 thousand jars. Now try drawing a line from bottom to top without hitting a red grain. You can’t. In fact, not only that, you can’t even do it without hitting thousands of red grains.

I read this comment a couple of days ago and didn’t continue reading the thread beyond it. The analogy annoyed me and has been bothering me ever since.
There are a couple of things that are quite wrong and if I understand your mind experiment correctly, then your analogy is way off track.
Parts per million (PPM) in climate science is taken to mean PPM by volume (PPMV). It is a ratio of the relative volumes of gases.
Changing the temperature and pressure will not affect the mixing ratio, only the volume of the mixture, which is why PPMV is a useful and handy metric.
The absolute amount of stuff (the number of molecules and their total mass) will change, but not the relative ratio of the mix.
To be clear, to prepare a PPMV mixture, you simply choose any VOLUME of a gas (Such as CO2 at any T or P, it doesn’t matter!) and you add it to a million equivalent volumes of air!
The volume of a gas (such as CO2) includes the molecules and the empty space they move in. What you fail to demonstrate is the very tiny size of molecules and the massive amount of empty space. Even at standard temperature and pressure (STP), your 10 cm cube is 99.9% empty space.
All the “grains” (molecules, or parts) would occupy just 0.0726% of the volume in the real world. But the volume is far smaller in the experiment, because the number of parts (grains) is limited to 10,000 in your cube.
In the real case there would be about 2.5×10^19 parts/grains/molecules and of that, CO2 would occupy just one hundred millionth of the volume.
The grains analogy in a 10cm cube makes no sense in a rational world. Even to get 10,000 molecules this closely packed or their movements constrained to orbits this tight would require a tiny volume at impossible pressures.
Maybe I’m wrong about what you mean by grains/parts/volume and I will stand to be corrected, if so. However those are probably the least of the problems with the analogy!
You then stack 14 km of “cubed” jars to represent the atmosphere but the problem with that is that the density of the real atmosphere lowers with altitude.
At just 5km half the mass of the atmosphere has gone. The volume has doubled and those grains have a lot more empty space to be alone in!
Forget 14km, at 11km, only 25% of the total mass remains (The PPMV is unchanged but the volume is now huge and the number of molecules is low).
To be honest, the effect is actually worse in reality because CO2 is heavier than air and the PPMV does actually change, leading to even less CO2 at altitude.
Oddly, your model succeeds in inverting reality! It represents the exact opposite of what actually happens.
Starting with a cube of sample “stuff” that could only really exist in an impossibly compressed and unimaginably tiny space you then extrapolated. The result exposes how massively opposite reality actually is. There is much more empty space, the parts are actually vanishingly small and when you add on 14 km of additional atmosphere, an analogy that started badly only gets worse as it is sucked into the emptiness of its own vacuity* 😉
*Sorry couldn’t resist the vacuum pun!
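As an aside, the number-density and volume-fraction figures quoted in this comment can be checked from the ideal gas law. A rough sketch (the ~0.185 nm effective molecular radius is an assumed round number for illustration):

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
P, T = 101325.0, 273.15      # standard pressure (Pa) and temperature (K)

n_density = P / (k_B * T)    # molecules per m^3, about 2.7e25
per_cm3 = n_density * 1e-6   # about 2.7e19 molecules per cm^3

# volume fraction occupied by the molecules themselves,
# assuming a rough effective radius of ~0.185 nm per molecule
r = 1.85e-10
frac = n_density * (4 / 3) * math.pi * r ** 3
print(f"{per_cm3:.2e} molecules/cm^3, {frac * 100:.3f}% of volume occupied")

# CO2 at 400 PPMV
co2_per_cm3 = per_cm3 * 400e-6   # about 1e16 CO2 molecules per cm^3
print(f"{co2_per_cm3:.2e} CO2 molecules/cm^3")
```

The occupied-volume fraction comes out near 0.07%, consistent with the 0.0726% figure in the comment, so "mostly empty space" is quantitatively right even though there are still enormous numbers of CO2 molecules in every cubic centimetre.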

I’m sorry that you have totally and completely missed the point of the analogy.

Not to mention that actual measurements of radiated spectrum escaping from the atmosphere back up my analogy 100%. If it is wrong, it is wrong for other reasons than you propose and certainly is not an inversion of reality as you claim. It is consistent with observational data.

Pamela Gray