Guest Essay by: Ken Gregory
The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC. (See Note 1.) The model badly fails to match the surface and atmosphere temperature observations, both globally and regionally, as presented in six graphs.
Global Trends
The graph below compares the near-surface global temperatures to the model runs.
Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.
The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.
Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.
Figure 1 shows that the computer model simulation after 1983 produces too much warming compared to the observations. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the 0.8 °C estimated global warming during the 20th century. The model warming trend, as determined by the best fit linear line from 1979 to 2012 through the model mean, is 0.337 °C/decade, and the average trend of the three observational datasets is 0.149 °C/decade. Therefore, the model warming rate is 226% of the observed rate (0.337/0.149 = 226%).
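For readers who want to check this kind of calculation, here is a minimal Python sketch of the alignment and trend-ratio comparison described above. The two input series are synthetic stand-ins with roughly the trends quoted in the text, not the actual model or observational data (those are listed in Note 2 and in the linked spreadsheet).

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2013)

# Synthetic stand-ins for illustration only (not the real datasets):
# a "model" series warming at ~0.34 °C/decade and an "observed" series at ~0.15 °C/decade.
model_mean = 0.034 * (years - 1979) + rng.normal(0, 0.1, years.size)
observed   = 0.015 * (years - 1979) + rng.normal(0, 0.1, years.size)

def align_to_1979(y, t):
    """Shift a series so its best-fit linear trend passes through zero in 1979."""
    slope, intercept = np.polyfit(y, t, 1)
    return t - (slope * 1979 + intercept)

model_a = align_to_1979(years, model_mean)
obs_a   = align_to_1979(years, observed)

model_trend = np.polyfit(years, model_a, 1)[0] * 10   # °C per decade
obs_trend   = np.polyfit(years, obs_a, 1)[0] * 10
print(f"Model trend:    {model_trend:.3f} °C/decade")
print(f"Observed trend: {obs_trend:.3f} °C/decade")
print(f"Trend ratio:    {model_trend / obs_trend:.0%}")
```

The shift only changes the vertical offset of each curve; the fitted trends, and therefore the ratio, are unaffected by it.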
The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions. A percentage change in water vapor has over five times the effect on temperature as the same percentage change in CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere, as shown in this graph.
The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes in the annual temperatures between runs because of the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short-term temperature changes are random.
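As a toy illustration of why the run mean is used (these are synthetic series, not the CanESM2 runs), averaging five runs that share the same forced trend but carry independent weather noise reduces the year-to-year noise by roughly the square root of the number of runs:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2013)
forced = 0.034 * (years - 1979)   # common forced response (synthetic)

# Five runs: identical forcing, independent "weather noise"
runs = np.array([forced + rng.normal(0, 0.15, years.size) for _ in range(5)])
run_mean = runs.mean(axis=0)

noise_single = (runs[0] - forced).std()
noise_mean   = (run_mean - forced).std()
print(f"noise in one run:    {noise_single:.3f} °C")
print(f"noise in 5-run mean: {noise_mean:.3f} °C  (expect about 1/sqrt(5) = {noise_single / np.sqrt(5):.3f})")
```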
The climate model assumes that almost all of the temperature changes are due to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but does not include multi-decadal temperature changes due to natural ocean oscillations or solar-induced natural climate change other than the small changes in the total solar irradiance. The historical record includes the effects of large volcanoes that can last for three years. The projections assume no large future volcanoes. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.
The model fails to match the temperature record because the model overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.
Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)
Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.
The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.
The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contributions that each atmospheric layer makes to the total satellite signal. We compared the balloon trends weighted by the lower troposphere satellite temperature weighting functions to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)
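The weighting itself is simple. Here is a small Python sketch using placeholder per-level trends and weights for illustration only (the actual weighting functions come from the Remote Sensing Systems website): the thick-layer trend is the weighted average of the per-level balloon trends.

```python
import numpy as np

# Hypothetical per-level balloon trends (°C/decade) and illustrative LT weights.
pressure_mb   = np.array([1000, 850, 700, 500, 400, 300])
balloon_trend = np.array([0.14, 0.13, 0.12, 0.10, 0.09, 0.08])   # placeholder values
lt_weight     = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])   # placeholder values

weighted_trend = np.sum(lt_weight * balloon_trend) / np.sum(lt_weight)
print(f"thick-layer (weighted) balloon trend: {weighted_trend:.3f} °C/decade")
print(f"near-surface (1000 mb) balloon trend: {balloon_trend[0]:.3f} °C/decade")
```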
Tropical Trends
Figure 3 compares the near-surface temperatures in the tropics from 20 degrees North to 20 degrees South latitude to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy of model to near-surface observation trends in the tropics is much greater than for the global average.
Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.
Figure 4 compares the warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the warming trend at the 400 mbar pressure level 55% greater than the surface trend.
The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.
Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.
The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.
Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.
The modeled temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!
Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.
South Trends
In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the observed surface temperatures are decreasing at 0.07 °C/decade.
Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.
Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.
Summary of Trend Errors
The table below summarizes the model trend to observation trend ratios.
Model Trend to Observation Trend Ratios (1979 to 2012)

| Region / Level | Model/Satellite | Model/Balloon | Model/Surface |
|---|---|---|---|
| Global Surface | 254% | 209% | 220% |
| Global 400 mb | 650% | 315% | |
| Tropics Surface | 304% | 364% | 249% |
| Tropics 400 mb | 690% | 467% | |
| Tropics 300 mb | | 486% | |
| South Surface | 3550% | | -474% |
The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, and is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.
Model results with large history match errors should not be used for formulating public policy. A model without a good history match is useless and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.
A global map of the near-surface air temperature from the model for April 2013 is here.
Anti-Information
Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”
In an article here, they wrote “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contain no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
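Here is a minimal sketch of that residual test on synthetic data. It illustrates the principle only, not the Michaels and Knappenberger calculation itself: a prediction adds information only if its residuals vary less than the raw record does.

```python
import numpy as np

def adds_information(observed, predicted):
    """Residual test sketched above: the prediction adds information only if
    the residuals (observed minus predicted) vary less than the raw data."""
    return np.var(observed - predicted, ddof=1) < np.var(observed, ddof=1)

rng = np.random.default_rng(2)
years = np.arange(1900, 2001)
observed = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)  # synthetic record

good_prediction = 0.007 * (years - 1900)   # tracks the underlying trend
bad_prediction  = 0.021 * (years - 1900)   # warms three times too fast

print("good prediction adds information:", adds_information(observed, good_prediction))
print("bad prediction adds information: ", adds_information(observed, bad_prediction))
```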
The Canadian Model Contributes to Fuel Poverty
The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.
The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.
Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, causing fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model’s failures have contributed to this misery.
Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.
Ken Gregory, P.Eng.
Ten years of providing independent
climate science information
Notes:
1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report, Climate Change 2013, shows that the CanESM2 transient climate sensitivity is 2.4 °C (for doubled CO2). Across the 30 models, the 90% certainty range of the transient climate sensitivity is 1.2 °C to 2.4 °C.
2. The Canadian climate model CanESM monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climate Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.
3. The average global weather balloon trend over all pressure levels from 1000 mbar to 300 mbar, weighted by the LT satellite temperature weighting functions, is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trend is representative of the near-surface temperature trend. The weighted average tropical weather balloon trend is 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends over all pressure levels, weighted by the mid-troposphere (MT) satellite temperature weighting functions, are 13% and 4% greater than the balloon trends at 400 mbar, respectively. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
=============================
PDF version of this article is here
Jim G says: October 25, 2013 at 7:09 am
This page says: https://www2.ucar.edu/climate/faq/what-average-global-temperature-now
“Between 1961 and 1990, the annual average temperature for the globe was around 57.2°F (14.0°C), according to the World Meteorological Organization.”
The HadCRUT4 temperature anomaly for 1961 – 1990 is zero (actually -0.00051 C).
The anomaly for the first 8 months of 2013 is 0.468 C. That gives the average 2013 temperature about 14.47 C.
Look at this amazing graph of the actual modeled temperatures;
http://curryja.files.wordpress.com/2013/10/figure.jpg
from http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/
The year 2000 modeled temperatures of the climate model ensemble range from 12.5 C to 15.7 C.
That is a huge range! Judith Curry writes, “how [can] these models produce anything sensible given the temperature dependence of the saturation vapor pressure over water, the freezing temperature of water, and the dependence of feedbacks on temperature parameter space.”
Clive, I see this from a different angle. The models describe the modelers’ idea of global warming in response to increased anthropogenic CO2 quite well. In fact, I would say that the modelers are, as a group, quite proud of the projections. I strongly doubt they were/are interested at all in natural climate variability. I propose this is why they do not seek to change these models towards natural climate variability. It would, by default, destroy the alarmism as soon as the word got out that the models have been changed to place more emphasis on natural swings. It would also instantly dry up the money they receive from the IPCC to provide the next round of projections.
In reality there is absolutely no money or rewards in modeling natural variability. No grants. No media coverage, no journal articles, and no chance at a Nobel Peace or Science Prize.
So to extend your last sentence, the models cannot describe how cloud cover or water vapor changes because the modelers do not want to describe these things, not because they are poorly understood.
@clivebest – You make a lot of assumptions. Why don’t you go looking for some evidence, instead? There have been papers written on how climate models are written, for example – http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5337646
The main article makes a common mistake about the way in which climate models work. It says:
“The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
The model does not assume this at all. This is an emergent property of the equations that are written into the model code.
These equations run from the relatively simple Navier-Stokes equations of fluid dynamics to more complicated equations used to approximate the behaviour of convective storms, or clouds in the boundary layer. These equations are written in different ways in the different climate models, and somehow the interactions between the equations produce models with a high climate sensitivity, or with a low climate sensitivity.
There is some interest among climate scientists in trying to devise a way to discriminate between “better” and “worse” climate models by comparing them to recent climate, but my main point is that the sensitivity of a model is not predictable in advance by the scientists who are writing the model code, because it is too complicated for that.
They have to run the model and see how it comes out.
Alistair Ahs says:
“They have to run the model and see how it comes out.”
As the article makes clear, the model is wrong.
I think über-warmist Phil Jones’ chart shows reality. Well before CO2 began to rise significantly, the planet went through the same cycles.
It is all natural variability. That is the null hypothesis. If you can falsify that, you will be the first. If you can’t, though, the default assumption must be natural variability. Everything else is evidence-free conjecture.
@Alistair Ahs.
I try not to make any assumptions. The use of software engineering techniques and module testing just reduces the probability of bugs per line of code. GCM models consist of millions of lines of historic and new code. No matter how sophisticated the procedures there will still be bugs present. Any large piece of software needs continuous updates to fix bugs.
That the properties of models emerge from the underlying physical equations is of course correct. Some equations are better known than others. MHD equations are more complex than hydrodynamics, and neither describes turbulence. As far as I know, cloud models are still only based on large grids, whereas real cloud formation is very complex.
All models are ideal simulations which can be used to interpret and understand observations. This is surely what is happening in hindcasts, where aerosol forcing appears to me to be fine-tuned to fit the data. However, using the same models to predict future observations is clearly far less certain. This inherent uncertainty does not seem to make it through to policy makers.
[PEGG is what group? Mod]
PEGG is the Journal of APEGA, (formerly APEGGA, the Association of Professional Engineers, Geologists and Geophysicists of Alberta). http://www.apegga.org/
In 2002 I was asked by APEGGA to write an article as one side of a debate with the Pembina Institute on the science of global warming. The debate is available at
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
I enlisted the participation of Dr. Sallie Baliunas, Harvard U Astrophysicist, and Dr. Tim Patterson, Carleton U Paleoclimatologist.
In our rebuttal we wrote eight statements, all of which have since been proven correct in those countries that fully embraced human-made global warming mania.
Our eight statements were directed against the now-defunct Kyoto Protocol, and are as follows:
1. Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.
2. Kyoto focuses primarily on reducing CO2, a relatively harmless gas, and does nothing to control real air pollution like NOx, SO2, and particulates, or serious pollutants in water and soil.
3. Kyoto wastes enormous resources that are urgently needed to solve real environmental and social problems that exist today. For example, the money spent on Kyoto in one year would provide clean drinking water and sanitation for all the people of the developing world in perpetuity.
4. Kyoto will destroy hundreds of thousands of jobs and damage the Canadian economy – the U.S., Canada’s biggest trading partner, will not ratify Kyoto, and developing countries are exempt.
5. Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution.
6. Kyoto’s CO2 credit trading scheme punishes the most energy efficient countries and rewards the most wasteful. Due to the strange rules of Kyoto, Canada will pay the former Soviet Union billions of dollars per year for CO2 credits.
7. Kyoto will be ineffective – even assuming the overstated pro-Kyoto science is correct, Kyoto will reduce projected warming insignificantly, and it would take as many as 40 such treaties to stop alleged global warming.
8. The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.
*******************
My country Canada was foolish enough to sign the Kyoto Protocol but then was wise enough to largely ignore it. Those Canadian provinces that did adopt the policies of global warming mania, like Ontario, are now paying a very heavy price for their foolishness.
In the aforementioned debate, the Pembina Institute rejected our position through an appeal to the authority of the IPCC, which had NO successful predictive track record at that time and STILL has NO successful predictive record.
I suggest that in science, one’s predictive record is perhaps the ONLY objective measure of one’s competence, and as such, the IPCC has completely failed.
Any simulation whose predictions are verifiable using data should be validated to the extent possible. Every missile simulation starts out being a best guess as to aerodynamics, propulsion and guidance. But guess what happens? The missile is flight tested. Discrepancies between the simulation and performance are then resolved. Errors in models are corrected.
This is the part of the simulation-prediction game that Mr. Mosher seems to be very unfamiliar with. Even to the point of discomfort.
PAMELA GRAY
You said “In reality there is absolutely no money or rewards in modeling natural variability. No grants. No media coverage, no journal articles, and no chance at a Nobel Peace or Science Prize.”
Well said. As recently reported on the GWPF web page, “The world is spending $1 billion per day to tackle global warming.” With all this free money going around, who would want to stop this? Certainly not those getting all this free money. So they will avoid reporting the truth or publishing correct model outputs, as it would turn off the money tap. Meanwhile the globe will go its own way as it always has done, and most likely cooling will set in for the next many decades regardless of what money is collected or spent.
BrianH writes, “I wonder what digital drugs they’re being fed.”
Possibly digitalias?
I wonder how this model does pre-1970? If a model is able to predict historical data over several decades, it might still be useful even if it fails to predict the last one or two. Although a 3 sigma deviation is definitely bad, if the model successfully predicts the past few decades it might suggest that there is something in the recent measurements that it isn’t properly taking into account, and not necessarily that the model is complete garbage.
“The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.”
Forgive me if this is obvious, but I’m not entirely sure if this is well justified. Is this the standard way that the base lines of these models are lined up?
I just stumbled across this article, and I think it’s an interesting analysis. But, since a lot of the commentary here seems to be on the anti-AGW side of things it just makes me feel a little skeptical. Has anyone else had any doubts about this, and then convinced themselves that those doubts were not well founded? Because that kind of scrutiny is what I’m really interested in, and yet seems so hard to find on either side of the argument.
The one thing that strikes me about all these studies is that AGW has changed from being a science to becoming a belief, because any evidence that AGW is not happening is totally disregarded or some illogical explanation that disregards the laws of physics is trotted out (the missing heat has disappeared into the ocean depths!).
I find this frightening, because science is capable of change, beliefs are not!
Maybe the missing heat is on the moon. Did Weaver check there?
Google ‘conenssti energy’ to discover what has driven average global temperature since 1610. Follow a link in that paper to a paper that gives an equation that calculates average global temperatures with 90% accuracy since before 1900 using only one external forcing. Carbon dioxide change has no significant influence. The average global temperature trend is down.
Josef Rejano says: October 25, 2013 at 10:15 pm
Why don’t you think this method of comparing model results to observations is well justified? If you criticize my method, why don’t you give reasons that another method is better?
The absolute values of temperatures in both models and observations are very uncertain; only the relative changes in temperatures are meaningful, so we have to use anomalies. I am comparing the rate of warming, or the trends of temperatures, between observations and the models. A different choice of comparisons would change the constant added to the curves, but would have no effect on the trend comparisons. It would just change the vertical position of the model curves relative to the observations in the graphs.
I chose the method suggested by Dr. Christy and Dr. Spencer in the post “STILL Epic Fail: 73 Climate Models vs. Measurements” here:
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Dr. Spencer states:
An alternative method would be to set the average values of a short period of time of model and observations to the same value. The satellite data starts in 1979, so we could have set the average of 1979 to 1984 values to zero in the graphs. Using a longer period would make the match of the surface and radiosonde observations to the models much worse in the period before 1979. The short 6-year period 1979 to 1984 might, by chance, be on the high or low side of the longer term trend of either the model or the observations, creating a biased comparison. In conclusion, our method of making the intercepts of the trends all match at 1979 is the best method of trend comparison for presentation in the graphs.
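To make the two baselining choices concrete, here is a short Python sketch on a synthetic series (not the actual data). Aligning the trend intercept at 1979 and zeroing the 1979 to 1984 mean shift the curve by different constants, but neither changes the fitted trend, which is the quantity being compared.

```python
import numpy as np

def align_trend_at_year(years, temps, year=1979):
    """Shift a series so its best-fit linear trend is zero at `year` (the method used in the article)."""
    slope, intercept = np.polyfit(years, temps, 1)
    return temps - (slope * year + intercept)

def align_period_mean(years, temps, start=1979, end=1984):
    """Shift a series so its mean over [start, end] is zero (the alternative mentioned above)."""
    mask = (years >= start) & (years <= end)
    return temps - temps[mask].mean()

# Synthetic series for illustration only
rng = np.random.default_rng(3)
years = np.arange(1979, 2013)
series = 0.02 * (years - 1979) + rng.normal(0, 0.1, years.size)

a = align_trend_at_year(years, series)
b = align_period_mean(years, series)
# Both shifts change only the vertical offset; the fitted trend is identical.
print(np.polyfit(years, a, 1)[0], np.polyfit(years, b, 1)[0])
```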
OK good, Canadian models are particularly bad, but can we get excited about one area in a big world? Could a dramagreen simply cherry-pick other places to find where models worked? In the whole world there must be 1 or 2
– what do you think?
stewgreen:
At October 27, 2013 at 4:48 am you ask
I think you have provided a clear example of the Texas Sharpshooter fallacy
https://yourlogicalfallacyis.com/the-texas-sharpshooter
The IPCC AR5 provides a spaghetti graph of 95 computer model projections. All except three of the projections are obviously wrong and, therefore, it is tempting to keep those three and to reject the 92 obviously wrong ones.
That temptation is an error. All of the models adopt similar modelling principles but operate different assumptions, parametrisations (i.e. guesses), etc. And it could be expected that three or more of the 95 models would have matched the past by pure chance. So, there is no reason to suppose that any of the models can project the future.
The correct action is either
(a) to examine each of all the models because that may provide information about the models
or
(b) to reject all of the models because it is decided that they are flawed in principle so an alternative modelling method is required.
Richard
Late to the party but I am confused. At what point did we stop using 30 year rolling averages? Wasn’t 30 years the magical number? If so, shouldn’t we have a starting point somewhere around 1983?
Or is the starting point rather more decidedly chosen for a particular characteristic, say the low point following cooling from the mid 1940s or so until now? So if we can arbitrarily pick a starting point to show “data” I would like to see them show everything based on a start point around 1925. Wouldn’t that make the overall picture a little bit different?
buggs:
At October 28, 2013 at 12:34 pm you ask
Yes, you are “confused” and your confusion was deliberately invoked by warmunists in promotion of the AGW-scare.
There is and was no justification for using “30 year rolling averages” and “30 years” was falsely asserted to be a “magical number” so there is no justifiable reason to suggest “a starting point somewhere around 1983”. I explain this as follows.
As part of the International Geophysical Year (IGY), in 1958 it was decided that 30 years would be the standard period for comparison of climatological data. The period of 30 years was a purely arbitrary choice and was selected on the basis that it was thought there was then sufficient data for global data to be compiled for only the previous 30 years.
A 30 year standard period is NOT a time for determining a climatological datum. It is a time for obtaining an average against which a climatological datum can be compared. So, for example, HadCRU and GISS each provide a climatological datum of mean global temperature for a single year and present it as a difference (i.e. an anomaly) from the average mean global temperature of a 30 year period. But they use different 30 year periods to obtain the difference.
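A small illustration of this point, on a synthetic series with illustrative base periods (the actual base periods used by the groups are not asserted here): the 30-year “normal” only fixes the zero of the anomaly scale, so changing the base period shifts the anomalies by a constant without changing any trend.

```python
import numpy as np

def anomaly(years, temps, base_start, base_end):
    """Express temperatures as departures from the mean of a chosen base period."""
    base = temps[(years >= base_start) & (years <= base_end)].mean()
    return temps - base

# Synthetic annual series, 1900-2012, for illustration only
rng = np.random.default_rng(4)
years = np.arange(1900, 2013)
temps = 14.0 + 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

a_6190 = anomaly(years, temps, 1961, 1990)   # one 30-year base period
a_5180 = anomaly(years, temps, 1951, 1980)   # a different 30-year base period
# The two choices differ only by a constant offset; trends are unchanged.
print(f"constant offset between the two choices: {np.mean(a_6190 - a_5180):.3f} °C")
```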
The arbitrary choice of 30 years is unfortunate for several reasons. For example, it is not a multiple of the ~11 year solar cycle, or the ~22 year Hale cycle, or the ~60 year AMO, etc. But in 1958 it was thought that 30 years was the longest available period against which to make comparisons, so 30 years was adopted as the standard period.
And that is the ONLY true relevance of 30 years for climate data.
Indeed, the IPCC did not use 30 years as a basis for anything except as the ‘climate normal’ for comparison purposes. Hence, for example, in its 1994 Report the IPCC used 4 year periods to determine if there were changes in hurricane frequency.
The satellite data for global temperature began to be recorded in 1979. And it showed little change to global temperature with time after nearly 20 years. So, alarmists started to promote the false idea that data had to be compiled over 30 years for it to be meaningful and, therefore, – they said – the satellite data should be ignored. But the satellite data had two effects: (a) it constrained the amount of global warming which e.g. HadCRU and GISS could report since 1979, and (b) as 30 years and more of satellite data were obtained the alarmists lost ability to refute the validity of the satellite data.
Richard