The real IPCC AR5 draft bombshell – plus a poll

Take a look at Figure 1.4 from the AR5 draft (shown below). The gray bars in Fig. 1.4 are irrelevant (the draft flubbed their definition); the colored bands are the ones that matter, because they provide bounds for all current and previous IPCC model forecasts: FAR, SAR, TAR and AR4.

Look for the surprise in the graph. 

[Image: IPCC_Fig1-4_models_obs – Figure 1.4 from the AR5 draft, observed temperatures vs. model projection ranges]

Here is the caption for this figure from the AR5 draft:

Estimated changes in the observed globally and annually averaged surface temperature (in °C) since 1990 compared with the range of projections from the previous IPCC assessments. Values are aligned to match the average observed value at 1990. Observed global annual temperature change, relative to 1961–1990, is shown as black squares  (NASA (updated from Hansen et al., 2010; data available at http://data.giss.nasa.gov/gistemp/); NOAA (updated from  Smith et al., 2008; data available at http://www.ncdc.noaa.gov/cmb-faq/anomalies.html#grid); and the UK Hadley  Centre (Morice et al., 2012; data available at http://www.metoffice.gov.uk/hadobs/hadcrut4/) reanalyses). Whiskers  indicate the 90% uncertainty range of the Morice et al. (2012) dataset from measurement and sampling, bias and coverage (see Appendix for methods). The coloured shading shows the projected range of global annual mean near surface temperature change from 1990 to 2015 for models used in FAR (Scenario D and business-as-usual), SAR (IS92c/1.5 and IS92e/4.5), TAR (full range of TAR Figure 9.13(b) based on the GFDL_R15_a and DOE PCM parameter settings), and AR4 (A1B and A1T). The 90% uncertainty estimate due to observational uncertainty and  internal variability based on the HadCRUT4 temperature data for 1951-1980 is depicted by the grey shading. Moreover, the publication years of the assessment reports and the scenario design are shown.

So let’s see how readers see this figure – remember, ignore the gray bands, as they aren’t part of the model scenarios.

I’ll have a follow up with the results later, plus an essay on what else was found in the IPCC AR5 draft report related to this.

372 Comments
December 17, 2012 10:05 am

dikranmarsupial:
Your post at December 17, 2012 at 9:47 am asks me

I would be interested to read the NOAA report you mention, can you give me a full reference so I can look it up?

The statement is in NOAA’s ‘State of the Climate Report’ for 2008.
It can be read at
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
The statement under discussion is in a box on page 23.
It says

The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

Richard

davidmhoffer
December 17, 2012 10:11 am

dikranmarsupial;
Those that want to claim there is statistically significant evidence of a reduction in the rate of warming need to wait until the confidence interval on the trend no longer includes the long term rate.
>>>>>>>>>>>>>>>
You are changing the basis of the argument. This isn’t the point. The point is that the official literature as endorsed by the IPCC maintained that the warming effects of CO2 plus feedbacks were so pronounced that it would be impossible for them not to be easily distinguished from natural variability over a 15-year period. Well, here we are at 16 years; you can argue all you want that the data is insufficient to show a reduction in the warming trend (or vice versa), but what you CANNOT argue is that the effects of CO2 are strong enough to stand out from natural variability.
So arguing that warming has or hasn’t stopped isn’t the point. The point is that CO2’s direct and feedback effects are insufficient to differentiate them from natural variability, and as a consequence, we may assume that the order of magnitude of these effects has been grossly overestimated.
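[For readers wanting to check this kind of claim themselves: the test the two commenters are arguing about can be sketched in a few lines. Everything below is synthetic and illustrative – the anomalies, the 0.017 C/yr long-term rate and the 16-year window are invented, not real observations.]

```python
import numpy as np
from scipy import stats

# Synthetic illustration only: the anomalies, the 0.017 C/yr long-term
# rate and the noise level are invented, not real observations.
rng = np.random.default_rng(0)
years = np.arange(1980, 2013)
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

long_term_rate = 0.017  # assumed long-term warming rate, C/yr

# Trend over just the most recent 16 years (1997-2012).
recent = years >= 1997
res = stats.linregress(years[recent], anoms[recent])

# 95% confidence interval on the recent slope.
n = int(recent.sum())
t_crit = stats.t.ppf(0.975, n - 2)
lo, hi = res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr

# A statistically significant slowdown requires the long-term rate to
# fall outside this interval.
print(f"recent slope {res.slope:.4f} C/yr, 95% CI [{lo:.4f}, {hi:.4f}]")
print("long-term rate excluded:", not (lo <= long_term_rate <= hi))
```

[With short windows and realistic noise the interval is typically wide enough to contain the long-term rate, which is precisely dikranmarsupial's point about waiting for significance.]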

dikranmarsupial
December 17, 2012 10:19 am

Richard, thank you for the link, it will be interesting to see how it differs from Easterling and Wehner, which apparently comes to the opposite conclusion.
http://www.agu.org/pubs/crossref/2009/2009GL037810.shtml

December 17, 2012 10:41 am

diketc says
Those that want to claim there is statistically significant evidence of a reduction in the rate of warming need to wait until the confidence interval on the trend no longer includes the long term rate.
Henry says
I looked at all the data from a random sample of 47 weather stations from all over, balanced by latitude and split 70/30 between sea and inland. (Longitude does not really matter, since earth makes its regular daily and yearly circles and we are looking at all results averaged over the years.)
The plot for the acceleration/deceleration of warming/cooling (in degrees K/t²) appears natural-looking and symmetrical, like the curve of a thrown object.
For those that did not read the relevant part of my report, I quote it here for you:
Method
The (black) figures you are looking at in the tables below (allow some time to load) represent the average change in degrees Celsius (or Kelvin) per annum, from the average temperatures measured during the period indicated. These are the slopes of the least-squares fit equations or “linear trend-lines” for the periods indicated, as calculated, i.e. the value before the x.
The average temperature data from the stations were obtained from http://www.tutiempo.net.
I tried to avoid stations with many missing data. Nevertheless, it is very difficult finding weather stations that have no missing data at all. If a month’s data was found missing or if I found that the average for a month was based on less than 15 days of that month’s data, I looked at the average temperatures of that month of the preceding- and following year, averaged these, and in this way estimated the temperatures of that particular month’s missing data.
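[The gap-filling rule described above – replace a missing month with the average of the same calendar month in the adjacent years – can be sketched like this. The numbers are made up and `temps` is a hypothetical year-by-month array, not the station data:]

```python
import numpy as np

# Hypothetical 3-year x 3-month block of monthly mean temperatures;
# year 2's second month is missing (NaN). Numbers are invented.
temps = np.array([
    [14.2, 15.1, 17.0],    # year 1
    [14.6, np.nan, 17.4],  # year 2
    [14.9, 15.7, 17.8],    # year 3
])

filled = temps.copy()
n_years, n_months = temps.shape
for y in range(1, n_years - 1):  # need a year on each side
    for m in range(n_months):
        if np.isnan(filled[y, m]):
            # Average the same calendar month in the adjacent years.
            filled[y, m] = 0.5 * (temps[y - 1, m] + temps[y + 1, m])

print(filled[1, 1])  # the average of 15.1 and 15.7
```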
Results
We note from my 3 tables below that Maxima, Means and Minima have all turned negative (from warming to cooling) between 12 and 22 years ago. The change in signal is best observed in that of the Maxima where we can see a gradual decline of the maximum temperatures from +0.036 degrees C per annum (over the last 38 years) to -0.016 (when taken over the last 12 years).
If we plot the global measurements for the change in Maxima, Means and Minima against the relevant time periods, it can be shown that the best fit for each of the curves is given by a polynomial of the 2nd order (parabolic fit).
Namely, for maxima it is
y = -0.00006x² + 0.00480x - 0.06393
r² = 0.997
Update
I have added a few more stations (including Washington DC), which gave me r² = 0.998.
The speed of warming/cooling for maxima now is 0.036 from 1974 (38 yrs), 0.029 from 1980 (32 yrs), 0.014 from 1990 (22 yrs) and -0.016 from 2000 (12 yrs).
For means, it is
y = -0.0001x² + 0.0064x - 0.0778
r² = 0.959
For minima, it is
y = -0.00008x² + 0.00408x - 0.04178
r² = 0.985
Using the maxima plot, we note that at 0 (zero) when there was a turning point, i.e. no warming or cooling, we find x=17 years. From this sample of weather stations I can therefore estimate with high accuracy that earth received its maximum energy input from the sun via the atmosphere during 2012-17=1995.
(if we are tempted to look at the root of same binomial on the other side, i.e. when global warming started, we find 68, suggesting that the global warming cycle started officially somewhere in 2012-68=1944. UPDATE: I realized this result is speculative, as I do not have any real measurements from 1944-1973 but we are using an approximation from a probable plot. However, I did realize since some time ago that the plot I was looking at is really like an a-c wave. I have subsequently been able to determine that the best sine wave for this plot would be one with a wavelength of 88 years. That would mean that the beginning of warming started somewhere around 1995-44=1951. That means we are now on a cooling curve until ca. 1995+44=2039.)
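[Anyone who wants to reproduce the parabolic fit can do so in a few lines. The sketch below uses only the four maxima rates quoted in this comment; it checks the arithmetic of the fit and its roots, not the validity of the method:]

```python
import numpy as np

# The four maxima rates quoted in the comment (deg C per annum) vs the
# length of the trend window (years).
window = np.array([38.0, 32.0, 22.0, 12.0])
rate = np.array([0.036, 0.029, 0.014, -0.016])

# Second-order (parabolic) least-squares fit, as in the comment.
coeffs = np.polyfit(window, rate, 2)
fitted = np.polyval(coeffs, window)

# Coefficient of determination r^2.
ss_res = float(np.sum((rate - fitted) ** 2))
ss_tot = float(np.sum((rate - rate.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot

# Roots of the parabola: window lengths at which the fitted rate is zero.
roots = np.sort(np.roots(coeffs).real)

print("coefficients:", coeffs)
print("r^2:", round(r2, 3))
print("zero-rate window lengths:", roots)
```

[Note that a degree-2 polynomial through only four points will almost always give a high r²; the fit says nothing about whether a parabola is the right model.]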
It can also be shown that the nature of the graph for means is one that lags a bit behind the graph for maxima: earth has a store where it keeps its energy, and a lot of that energy only comes out a bit later. Although the plot for means, with r² = 0.959, is still impressive, showing there is a definite relationship, I would not use it to determine the roots to give me the actual time when earth reached its maximum energy output (i.e. when it was the “warmest”). However, I would generally agree with the available datasets like RSS, Hadcrut3 and Hadsst2 that that must have been a few years after 1995.
end quote
Now let us look specifically at these data:
The speed of warming/cooling in degrees K/annum for maxima now is 0.036 from 1974 (38 yrs), 0.029 from 1980 (32 yrs), 0.014 from 1990 (22 yrs) and -0.016 from 2000 (12 yrs);
now, do any plot that you like with it, i.e. binomial, linear or natural log or whatever and tell me that the curve fit that you get is not statistically significant?
Finally, I want to say that this exercise of mine is definitely repeatable, don’t you (all) think?
(I am thinking of the lazy buggers and so-called “climate scientists” in the universities who could utilize the people in their classes to do -this very simple- applied practical statistical work)

December 17, 2012 10:46 am

dikranmarsupial:
Your post at December 17, 2012 at 10:19 am says in total

Richard, thank you for the link, it will be interesting to see how it differs from Easterling and Wehner, which apparently comes to the opposite conclusion.
http://www.agu.org/pubs/crossref/2009/2009GL037810.shtml

Pardon?!
That paper’s Abstract says in total

Numerous websites, blogs and articles in the media have claimed that the climate is no longer warming, and is now cooling. Here we show that periods of no trend or even cooling of the globally averaged surface air temperature are found in the last 34 years of the observed record, and in climate model simulations of the 20th and 21st century forced with increasing greenhouse gases. We show that the climate over the 21st century can and likely will produce periods of a decade or two where the globally averaged surface air temperature shows no trend or even slight cooling in the presence of longer‐term warming.

The paper is pay-walled so I have not read it, but the phrase “a decade or two” is ambiguous in the Abstract. It could mean ‘periods of 10 to 20 years duration’ or ‘one or two periods each of 10 years duration’.
Neither interpretation gives confidence in the paper, because such ambiguity in the Abstract implies it was approved by ‘pal review’: why not say “multiple periods of up to 10 years” or “periods of up to 20 years”?
Also, the Abstract admits it is ‘damage limitation’ against reports in “websites, blogs and articles in the media” but makes no mention of e.g. the 2008 NOAA statement.
Frankly, I do not intend to pay to read such a paper.
Richard

dikranmarsupial
December 17, 2012 10:57 am

This question at stats.stackexchange.com (a site for asking questions related to statistics, where the answers are voted on by other users, many of whom are experienced statisticians) might be of interest:
http://stats.stackexchange.com/questions/12461/how-to-specify-the-null-hypothesis-in-hypothesis-testing
The answer with the most votes begins “A rule of the thumb from a good advisor of mine was to set the Null-Hypothesis to the outcome you do not want to be true i.e. the outcome whose direct opposite you want to show.”

December 17, 2012 11:10 am

Max:
Your post addressed to me at December 17, 2012 at 10:24 am provides an illustration concerning a hypothetical town and says

And the null hypothesis was not a no-change case, it postulated a change or a non-zero slope (or equivalently, correlation coefficient). This was precisely what you said was disallowed with your point about “no subjectivity” and “no change” and, because it is allowed, why I said you had got it wrong.

No!
I thought I had explained this in my previous reply to you.
THAT IS A ‘NO CHANGE CASE’.
The system was said to be experiencing “a non-zero slope” so the system behaviour was that “slope”.
Similarly, an object may be in free fall. The Null Hypothesis says the object will continue to fall under acceleration due to gravity and deceleration due to drag. A change from the Null Hypothesis occurs when the object hits the ground. The fact that the object’s change of height with time (i.e. vertical velocity) had “a non-zero slope” does not prevent the system behaviour being “a no change case” until the object hits the ground.
And you ask me

Did you look up Fisher’s z- transformation – it is about testing whether an observed correlation coefficient could derive from a population of arbitrary (not necessarily zero) correlation so bang on what you said couldn’t be done? It’s especially relevant as the slope coefficient of a regression is a simple linear multiple of the correlation coefficient so what goes for the one goes for the other.

I am aware of it but I did not refresh my memory by looking it up: If I did then I am certain my understanding would still be bettered by your superior knowledge and understanding.
I agree that it is a good analogy for determining whether the global temperature change has experienced a significant change. But, again, I fail to see how this example alters the truth of what I said about the Null Hypothesis.
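[For readers following the Fisher z exchange: the transformation lets you test an observed correlation against any hypothesised population correlation, not just zero. A minimal sketch – the values r = 0.30, rho0 = 0.55 and n = 100 are invented for illustration:]

```python
import numpy as np
from scipy import stats

def fisher_z_test(r_obs, rho0, n):
    """Two-sided test of H0: population correlation equals rho0,
    using Fisher's z-transformation (requires n > 3)."""
    z = (np.arctanh(r_obs) - np.arctanh(rho0)) * np.sqrt(n - 3)
    p = 2.0 * stats.norm.sf(abs(z))
    return z, p

# Invented numbers for illustration: observed r = 0.30 from n = 100
# points, tested against a non-zero null of rho = 0.55.
z, p = fisher_z_test(0.30, 0.55, 100)
print(f"z = {z:.3f}, p = {p:.4f}")
```

[Since a regression slope is a linear multiple of the correlation coefficient, the same test applies to trends against a non-zero null slope, which is Max's point.]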
In conclusion, I recognise that we are discussing on a blog for the benefit of others as well as ourselves. Hence, it is important to debate in words so others can understand. However, in this case, I would not object if you wanted to add mathematical expressions to define what you intend by your words because it is possible that may avoid us ‘talking past each other’.
Richard

dikranmarsupial
December 17, 2012 11:27 am

Richard, having read the paper again (Easterling and Wehner), they do mean a period of a decade or two, but the experimental results are based solely on decadal periods, so that line in the abstract is indeed not well supported. Note that the NOAA report does cite Easterling and Wehner on page 23, so it is clear that NOAA consider the paper to be of good quality and directly relevant to the topic we are discussing.
Having read the relevant pages of the NOAA report, I find the conclusion is based on the analysis of only one model (HadCM3), so while a 15-year period of little or no warming is inconsistent with HadCM3, that does not mean that it is inconsistent with all models, nor with the CMIP3 ensemble used in the IPCC AR4 WG1 report. I think it is an overstatement to conclude that all models are invalidated on that basis. It could well be that HadCM3 under-estimates natural variability.
A more recent paper on this topic is Santer et al (http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml), which IIRC uses the CMIP3 model ensemble and draws the conclusion that 17 years is the required period to expect to be able to identify warming. So I suspect the difference lies in whether you analyse one particular model, or whether you analyse the ensemble that the IPCC actually used. However, regardless of the exact time period, it is pretty clear that the observations are very much at the lower end of what can be considered plausible, given the CMIP3 A1B model projections, so if the planet is not warming then there will soon come a point where there can be no equivocation.
If the aim is simply to falsify the models, then that is very easy, all you have to do is to look at the decline in Arctic sea ice extent, which has been significantly more rapid than any of the models plausibly predict. All models are wrong, but some are useful (GEP Box).

mpainter
December 17, 2012 11:32 am

dikranmarsupial
So you link to a paywalled study to support a point. And no, your next link is of no interest. Go back to your marsupials and please stay there.

December 17, 2012 11:41 am

Henry says
now, do any plot that you like with it, i.e. binomial, linear or natural log or whatever and tell me that the curve fit that you get is not statistically significant?
Henry says
sorry, that should have been
now, do any plot that you like with it, i.e. binomial, linear or natural log or whatever and tell me that the curve fit that you get, i.e. the curve going downwards, showing a cooling trend, is not statistically significant?

dikranmarsupial
December 17, 2012 11:44 am

BTW, google scholar is quite useful for finding copies of papers that have been made available to download for free (e.g. pre-prints). Easterling and Wehner’s paper can easily be obtained this way.
http://scholar.google.co.uk/scholar?hl=en&q=Easterling+and+Wehner

December 17, 2012 11:59 am

dikranmarsupial:
At December 17, 2012 at 11:27 am you say

If the aim is simply to falsify the models, then that is very easy, all you have to do is to look at the decline in Arctic sea ice extent, which has been significantly more rapid than any of the models plausibly predict. All models are wrong, but some are useful (GEP Box).

Agreed!
The models say that accelerated warming – so ice loss – should occur in both polar regions. But:
1. The Arctic shows more ice loss than the models indicate: partial model fail.
2. The Antarctic shows ice GAIN: complete model fail.
Conclusion: the models are useless.
But I have known they are useless – and why they are useless – since 1999.
I have explained this repeatedly on WUWT and it seems I need to copy it again. For example, I wrote the following post on the thread at
http://wattsupwiththat.com/2011/08/02/aerosol-sat-observations-and-climate-models-differ-by-a-factor-of-three-to-six/#comment-711396
Richard
****************
Richard S Courtney says:
August 2, 2011 at 6:46 am
Friends:
The article quotes Penner saying:
“The satellite estimates are way too small,” said Joyce Penner, the Ralph J. Cicerone Distinguished University Professor of Atmospheric Science. “There are things about the global model that should fit the satellite data but don’t, so I won’t argue that the models necessarily are correct. But we’ve explained why satellite estimates and the models are so different.”
Hmmm. Let us consider what we know about how the models incorporate climate sensitivity and aerosol effects.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
”One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy. Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”
And, importantly, Kiehl’s paper says:
”These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
”Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2,
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words, the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5, and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
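[The compensation effect Kiehl describes is easy to illustrate numerically. In the toy calculation below (all numbers invented, using a crude linear warming = sensitivity x forcing relation), a model with higher sensitivity needs a more strongly negative aerosol forcing to reproduce the same 20th-century warming:]

```python
# Toy numbers, invented for illustration: warming is modelled crudely as
# sensitivity (deg C per W/m^2) times total forcing (W/m^2).
observed_warming = 0.7   # deg C over the 20th century (approximate)
ghg_forcing = 2.6        # assumed greenhouse-gas forcing, W/m^2

sensitivities = [0.3, 0.5, 0.8]  # three hypothetical models

aerosols = []
for lam in sensitivities:
    total = observed_warming / lam        # forcing needed to match warming
    aerosols.append(total - ghg_forcing)  # implied aerosol "fiddle factor"
    print(f"sensitivity {lam:.1f}: implied aerosol forcing "
          f"{aerosols[-1]:+.2f} W/m^2")
```

[This is only a caricature of the relationship, but it shows why sensitivity and aerosol forcing can trade off against each other while the hindcast stays fixed.]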
In summation, all the model projections of future climate change are blown out of the water by the findings of Penner et al.
Richard

Jeff B.
December 17, 2012 4:52 pm

Does this mean we can remove Hansen and Schmidt from the government payroll and get back to some serious science? Let’s build our Thorium future.

December 17, 2012 4:58 pm

Geoff Sharp says:
December 17, 2012 at 6:43 am
[snip – Geoff, I’ve banned you before for your over the top remarks, and let you back in against my better judgment – this time, with this ugly comment it is permanent, beat it, zealot. – Anthony]
A bridge too far Anthony….expect a backlash.
—————————-
REPLY: Oh, nice….threats. How mature. All because you and your friends insist on calling a long-term solar minimum that hasn’t been called yet by a name that hasn’t been approved by the solar science community. I don’t call it “the Eddy Minimum” in day-to-day comments because it is premature. You however call it the “Landscheidt minimum” at every opportunity you get, to the point of being annoying. I point out that you and your friends calling it the “Landscheidt minimum” is wrong when:
A. It hasn’t happened for certain yet (one solar cycle does not a grand minimum make).
B. It hasn’t been officially named yet.
C. The solar science community has taken up the idea to name it for Jack Eddy, discoverer of the Maunder Minimum.
Do you listen to yourselves? This is zealotry. – Anthony

The backlash will be in the form of a campaign against you on this topic of the “Eddy Minimum” that you and Svalgaard are pushing. This website has not agreed with your agenda to take the naming rights away from Landscheidt, who is accepted by most to be the person most deserving. There was a lot of opposition to your proposal in your article http://wattsupwiththat.com/2009/06/13/online-petition-the-next-solar-minimum-should-be-called-the-eddy-minimum/ and in a previous article on your blog the majority suggested Landscheidt even though you provided no option for his name.
You do not even have support on your own blog, yet you try to whitewash your own views through. You have a cheek calling me the zealot.
If I am banned you will need to remove my sunspot count comparison graph on your solar reference page.

Jane R
December 17, 2012 9:29 pm

Great graphic on this! http://www.skepticalscience.com/graphics.php?g=47

Terry Oldberg
December 17, 2012 9:37 pm

Those “IPCC model forecasts” are not forecasts (aka predictions) but rather are projections. Though climatologists often conflate projections with predictions, they are different concepts.

December 17, 2012 11:06 pm

@Jane:
Jane R says:
December 17, 2012 at 9:29 pm
Great graphic on this! http://www.skepticalscience.com/graphics.php?g=47
+++++++
Jane that new red line which starts below average at the start and stripes right to the max of today… and still the new red line shows cooling of 0.5C over the past 15 years!

December 18, 2012 1:33 am

Richard says
In summation, all the model projections of future climate change are blown out of the water by the findings of Penner et al.
Henry says
Myself and others here have clearly established that there is no scientific basis for man made climate change. Hence, we should concentrate on the observation of the natural processes that dominate the weather.
IMHO I think we only need to do a best fit for the actual observed differences in temperatures,
like I have done here :
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
By looking at the curve of Anchorage just below the averaged “global” curve, you can see that each weather station has its own sine wave but the wavelength of 88 years stays the same.
The difference between the heights of the tops and lows of the curves of each weather station depends largely on the make up of the chemicals lying on top of the atmosphere directly above it.
Note that almost all major data sets show an uptrend from about 1925, which, in actual fact, is not in contradiction with my particular sine wave fit. Namely, before 1925 the global temp. record is a bit murky. In those days they had not even realized that thermometers, once manufactured, need to be re-calibrated every now and then…..So, basically, we do not have a reliable (global) base line for global temperature until a few decades after 1925.
In this respect, it is clear that my results show that we are going to cool. My results should be confirmed by others asap. This natural climate change which can thus be predicted from each particular weather station may bring more rain and more cold and snow at certain places and/or less at others. Those farming at the colder high latitudes or risky areas should be warned to take their farming elsewhere….
Now that is what climate science is supposed to be for.
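[For what it’s worth, once the 88-year period is fixed, the sine fit described above reduces to ordinary linear least squares in the sine and cosine amplitudes, so it is straightforward for others to repeat. A sketch on synthetic data (not the station records):]

```python
import numpy as np

# With the period fixed at 88 years, the model
#   T(t) = A*sin(w*t) + B*cos(w*t) + C,  w = 2*pi/88,
# is linear in (A, B, C), so ordinary least squares recovers it.
# The data here are synthetic, generated only to exercise the method.
rng = np.random.default_rng(1)
t = np.arange(1900, 2013, dtype=float)
signal = 0.3 * np.sin(2.0 * np.pi * (t - 1951.0) / 88.0)
temps = signal + rng.normal(0.0, 0.05, t.size)

w = 2.0 * np.pi / 88.0
X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
(A, B, C), *_ = np.linalg.lstsq(X, temps, rcond=None)

amplitude = np.hypot(A, B)  # should recover roughly 0.3
print(f"fitted amplitude: {amplitude:.3f}")
```

[The caveat is that this only tests the fit for an assumed period; it does not establish that an 88-year cycle is actually present in the data.]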

December 18, 2012 3:40 am

Terry Oldberg:
At December 17, 2012 at 9:37 pm you say

Those “IPCC model forecasts” are not forecasts (aka predictions) but rather are projections. Though climatologists often conflate projections with predictions, they are different concepts.

I agree.
A prediction is a scientific term.
It is a forecast (or hindcast) which can be compared to reality to discern if the method which generated it displays forecasting skill. Thus, a prediction can be used to determine faults in the understandings which formulate the method that generated the prediction, and this enables faulty understandings to be amended. Predictions are made by all scientific disciplines; i.e. physics, chemistry, biology, etc.
A projection is a pseudoscientific term.
It is a forecast (or hindcast) which can be used for political purposes. If a projection fails to forecast reality then the projection is amended post hoc as a method to avoid criticism of the political objective. Projections are made by all pseudoscientific disciplines; i.e. astrology, palmistry, ‘climate science’, etc.
Richard

Philip Shehan
December 18, 2012 4:48 am

mpainter says:
December 17, 2012 at 9:24 am…
In response to your questions to me:
What date are you selecting for the start of the 16-year period? If you mean starting from 1996, and look at mean data for the data sets Hadcrut3, Gistemp, UAH and RSS, there is warming at the rate of 0.1 C per decade. If you cherry-pick the extreme southern-summer El Nino years of 1997/98, there is a slight but probably not statistically significant warming trend. If you start in 1999, there is again an upward trend of 0.1 C per decade. The highly specific cherry-picking of the El Nino southern-summer years to make a claim of no warming is totally unscientific.
http://www.woodfortrees.org/plot/wti/from:1995/to:2013/plot/wti/from:1996/to:2013/trend/plot/wti/from:1998/to:2013/trend/plot/wti/from:1999/to:2013/trend
On the use of error bars: No measurement is exact. Measurement errors are classified as “systematic” or “random” (I won’t bother here explaining the difference), and if different measurements are combined to give data points, the errors must be summed. The point is that the true number can only be said to lie within a range of values to a certain degree of statistical confidence, not necessarily that quoted as the “headline” value.
In published data, errors are not just given graphically; tables of data must also contain the error range. A manuscript submitted without the confidence limits would be rejected.
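[The point about summing errors can be made concrete: independent error components combine in quadrature, which gives a smaller total than naively adding them. The component magnitudes below are illustrative, not the actual HadCRUT4 values:]

```python
import math

# Illustrative 1-sigma error components (not the actual HadCRUT4 values).
components = {"measurement": 0.03, "sampling": 0.05,
              "bias": 0.04, "coverage": 0.06}

# Independent errors combine in quadrature (root-sum-of-squares),
# which is smaller than a naive linear sum.
combined = math.sqrt(sum(s ** 2 for s in components.values()))
linear_sum = sum(components.values())

print(f"quadrature: {combined:.3f}, naive linear sum: {linear_sum:.3f}")
```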
Note that even without the error bars, most of the data points lie within the projection bands, albeit at the low end of the range.
By definition, climate models do not specifically project any periods of no warming, low warming, or extreme warming due to unpredictable future occurrences such as the 1998 El Nino year or La Nina years or the Mount Pinatubo eruption. Predictable variations such as solar cycles will make upward and downward contributions superimposed on the long-term upward trend due to accumulation of greenhouse gases. The job of the models is to project long-term (i.e. multidecadal) trends where such short-term influences average out. Every climatologist (or other scientist for that matter) knows that there will be variations in the long-term trend. It is a statement of the bleeding obvious.
I do not know what specific “tipping points” or “panic talk” you are referring to but they are entirely irrelevant to the question posed by Mr Watts: “So let’s see how readers see this figure”
Ditto your question on CO2 mitigation.

Lars P.
December 18, 2012 5:04 am

Hm, I wonder what the 2% who see the observed temperature above the model scenario ranges are smoking?

Graham W
December 18, 2012 5:18 am

About these models. OK, so there's no empirical evidence for CO2 being the primary cause of global warming... but in fairness, how could there be? You can't very well run an experiment with two Earths: one control Earth where the CO2 levels are kept the same, and another Earth where CO2 levels are increased. You can't observe the temperatures over time on both and say "hey presto, there's your evidence" either way when you get the results.
So along comes modelling. Well, it actually seems quite logical in principle. You can’t physically have two Earths, so you try to create a computer model to simulate the climate, then run projections of future climate and see if it correlates as time passes. Then if the projections correlate with reality, there’s your empirical evidence. It’s not actually a bad idea, let’s face it.
Two problems/questions with how this has all panned out:
1) Surely the time to alarm and disturb and harangue everyone about cutting CO2 emissions was AFTER the experiment was completed: once the models with CO2 as the primary driver gave projections that correlated with reality, over a period long enough to rule out any "accidental" correlation, i.e. once there was empirical evidence.
2) As far as I understand it, there's a list of forcings that go into the models, in order of their perceived significance (I could well be wrong here, so correct me if I am). Why weren't any models set up with the forcings in a different order, just in case they'd got something wrong, i.e. with CO2 as a lower-level forcing than it was run at? Then we could be sitting here now looking at various different "projections vs. reality" graphs from the IPCC report. We could say "CO2 didn't pan out very well as the primary driver, let's see how this other one turned out", etc. etc.

Bob Layson
December 18, 2012 5:41 am

The complete giveaway is the need to point to anything and everything but the thermometers. Desperate CAGWers scream about the manifest effects of something when they need only flourish the evidence of the instruments in the sceptics' faces.

Bill
December 18, 2012 6:58 am

With error bars that large on the observed temperatures, one can do a pretty good linear fit and get a fairly small slope, with zero within the realm of possibility. If temperatures had not dipped in 1992 due to Pinatubo, the slope would be even smaller for the last 22 years. As I noted earlier, they only seem to use such large error bars when they want to show that the models' extremes are still "close" to the actual temperatures. The next 10-15 years will be very interesting. Nature will tell us which side has been closer to the truth all along. What will hurricane and tornado data look like? Once we have been following arctic ice for 40-45 years instead of 30, what will we see? The ocean cycles appear to be 30 and 60 years, so again, having more than 30 years of satellite and modern instrument data will be interesting. We'll know how significant cosmic rays and clouds are, and we'll see what kind of solar minimum we may be in and what its effects are.
Just be patient (or one might say scientific) and we’ll have better numbers and be closer to the truth. Even if natural variability and random events turn out to be a much bigger factor than supposed now, we may know a bit more about that too.

December 18, 2012 6:59 am

Graham W says
Surely the time to alarm and disturb and harangue everyone about cutting CO2 emissions was AFTER the experiment was completed.
Henry says
I am most interested to hear from you what experiment you are referring to.
I certainly could not find the balance sheet that I was looking for.
http://blogs.24.com/henryp/2011/08/11/the-greenhouse-effect-and-the-principle-of-re-radiation-11-aug-2011/
