Gosh, you’d think they’d check the data before issuing a statement like this (press release follows).
It [CO2] was responsible for 85% of the increase in radiative forcing – the warming effect on our climate – over the decade 2002-2012. Between 1990 and 2013 there was a 34% increase in radiative forcing because of greenhouse gases, according to the latest figures from the U.S. National Oceanic and Atmospheric Administration (NOAA).
But the temperature data tell an entirely different story. Look at this plot of the major global temperature metrics and their trends from 2002-2012 – there’s no warming to be seen!
In fact, with the exception of UAH, which is essentially flat for the period, the other metrics all show a slight cooling trend.
Plot from Woodfortrees.org – source:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:2002/to:2012/plot/hadcrut3vgl/from:2002/to:2012/trend/plot/gistemp/from:2002/to:2012/plot/gistemp/from:2002/to:2012/trend/plot/rss/from:2002/to:2012/plot/rss/from:2002/to:2012/trend/plot/uah/from:2002/to:2012/plot/uah/from:2002/to:2012/trend/plot/best/from:2002/to:2012/plot/best/from:2002/to:2012/trend
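For anyone who wants to reproduce the trend calculation outside of Wood for Trees, here is a minimal Python sketch of the same least-squares fit. The file name and the two-column layout (decimal year, anomaly in °C) are placeholders for whichever series you download, not anything referenced in the plot above.

```python
# Minimal sketch: least-squares trend over 2002-2012 for a monthly anomaly series.
# Assumes a two-column text file (decimal year, anomaly in deg C); the file name
# "hadcrut_monthly.txt" is a placeholder, not something referenced in the post.
import numpy as np

def decadal_trend(years, anomalies, start=2002.0, end=2012.0):
    """Return the OLS slope in deg C per decade over [start, end)."""
    mask = (years >= start) & (years < end)
    slope, _intercept = np.polyfit(years[mask], anomalies[mask], 1)
    return slope * 10.0  # deg C/yr -> deg C/decade

if __name__ == "__main__":
    data = np.loadtxt("hadcrut_monthly.txt")   # placeholder file
    yr, anom = data[:, 0], data[:, 1]
    print(f"2002-2012 trend: {decadal_trend(yr, anom):+.3f} C/decade")
```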
UPDATE: here is the same graph as above, but with CO2 increase (a proxy for forcing) added. Clearly, global temperature does not follow the same trend.
Plot from Woodfortrees.org – source:
From the World Meteorological Organization – Press Release No. 991 (h/t to Steve Milloy, emphasis mine)
CO2 concentrations top 400 parts per million throughout northern hemisphere
Geneva, 26 May 2014 (WMO) – For the first time, monthly concentrations of carbon dioxide (CO2) in the atmosphere topped 400 parts per million (ppm) in April throughout the northern hemisphere. This threshold is of symbolic and scientific significance and reinforces evidence that the burning of fossil fuels and other human activities are responsible for the continuing increase in heat-trapping greenhouse gases warming our planet.
All the northern hemisphere monitoring stations forming the World Meteorological Organization (WMO) Global Atmosphere Watch network reported record atmospheric CO2 concentrations during the seasonal maximum. This occurs early in the northern hemisphere spring before vegetation growth absorbs CO2.
Whilst the spring maximum values in the northern hemisphere have already crossed the 400 ppm level, the global annual average CO2 concentration is set to cross this threshold in 2015 or 2016.
“This should serve as yet another wakeup call about the constantly rising levels of greenhouse gases which are driving climate change. If we are to preserve our planet for future generations, we need urgent action to curb new emissions of these heat trapping gases,” said WMO Secretary-General Michel Jarraud. “Time is running out.”
CO2 remains in the atmosphere for hundreds of years. Its lifespan in the oceans is even longer. It is the single most important greenhouse gas emitted by human activities. It was responsible for 85% of the increase in radiative forcing – the warming effect on our climate – over the decade 2002-2012.
Between 1990 and 2013 there was a 34% increase in radiative forcing because of greenhouse gases, according to the latest figures from the U.S. National Oceanic and Atmospheric Administration (NOAA).
According to WMO’s Greenhouse Gas Bulletin, the amount of CO2 in the atmosphere reached 393.1 parts per million in 2012, or 141% of the pre-industrial level of 278 parts per million. The amount of CO2 in the atmosphere has increased on average by 2 parts per million per year for the past 10 years.
Since 2012, all monitoring stations in the Arctic have recorded average monthly CO2 concentrations in spring above 400 ppm, according to data received from Global Atmosphere Watch stations in Canada, the United States of America, Norway and Finland.
This trend has now spread to observing stations at lower latitudes. WMO’s global observing stations in Cape Verde, Germany, Ireland, Japan, Spain (Tenerife) and Switzerland all reported monthly mean concentrations above 400 ppm in both March and April.
In April, the monthly mean concentration of carbon dioxide in the atmosphere passed 401.3 parts per million at Mauna Loa, Hawaii, according to NOAA. In 2013 this threshold was only passed on a couple of days. Mauna Loa is the oldest continuous CO2 atmospheric measurement station in the world (since 1958) and so is widely regarded as a benchmark site in the Global Atmosphere Watch.
The northern hemisphere has more anthropogenic sources of CO2 than the southern hemisphere. The biosphere also controls the seasonal cycle. The seasonal minimum of CO2 is in summer, when substantial uptake by plants takes place. The winter-spring peak is due to the lack of biospheric uptake, and increased sources related to decomposition of organic material, as well as anthropogenic emissions. The most pronounced seasonal cycle is therefore in the far north.
The WMO Global Atmosphere Watch coordinates observations of CO2 and other heat-trapping gases like methane and nitrous oxide in the atmosphere to ensure that measurements around the world are standardized and can be compared to each other. The network spans more than 50 countries including stations high in the Alps, Andes and Himalayas, as well as in the Arctic, Antarctic and in the far South Pacific. All stations are situated in unpolluted locations, although some are more influenced by the biosphere and anthropogenic sources (linked to human activities) than others.
The monthly mean concentrations are calculated on the basis of continuous measurements. There are about 130 stations that measure CO2 worldwide.
A summary of current climate change findings and figures is available here
Preliminary CO2 mole fractions at the GAW global stations (March 2014; April 2014)
* data are filtered for clean sector
** only night-time values are used to calculate monthly mean
Legend and data courtesy:
ALT: Alert, Canada, 82.50°N, 62.34°W, 210 m a.s.l. (Environment Canada, Canada)
AMS: Amsterdam Island, France, 37.80°S, 77.54°E, 70 m a.s.l. (Research program “SNO ICOS-France” led by LSCE/OVSQ (CEA, INSU))
BRW: Barrow (AK), USA, 71.32°N, 156.6°W, 11 m a.s.l. (NOAA, USA)
CNM: Monte Cimone, Italy, 44.17°N, 10.68°E, 2165 m a.s.l. (Italian Air Force Mountain Centre – Mt. Cimone, Italy)
CVO: Cape Verde Atmospheric Observatory, Cape Verde, 16.86°N, 24.87°W, 10 m a.s.l. (Max Planck Institute for Biogeochemistry, Jena, Germany)
HPB: Hohenpeissenberg, Germany, 47.80°N, 11.01°E, 985 m a.s.l. (Deutscher Wetterdienst (DWD), Germany)
IZO: Izaña (Tenerife), Spain, 28.31°N, 16.50°W, 2373 m a.s.l. (Agencia Estatal de Meteorología (AEMET), Spain)
JFJ: Jungfraujoch, Switzerland, 46.55°N, 7.99°E, 3580 m a.s.l. (Empa, Switzerland)
MHD: Mace Head, Ireland, 53.33°N, 9.90°W, 5 m a.s.l. (Research program “SNO ICOS-France” led by LSCE/OVSQ (CEA, INSU), in collaboration with EPA, Ireland)
MLO: Mauna Loa (HI), USA, 19.54°N, 155.6°W, 3397 m a.s.l. (NOAA, USA)
MNM: Minamitorishima, Japan, 24.29°N, 154.0°E, 8 m a.s.l. (Japan Meteorological Agency, Japan)
PAL: Pallas, Finland, 67.97°N, 24.12°E, 560 m a.s.l. (Finnish Meteorological Institute (FMI), Finland)
SMO: Samoa (Cape Matatula), USA, 14.25°S, 170.6°W, 77 m a.s.l. (NOAA, USA)
SPO: South Pole, Antarctica, 90.00°S, 24.80°W, 2841 m a.s.l. (NOAA, USA)
ZEP: Zeppelin Mountain (Ny Ålesund), Norway, 78.91°N, 11.89°E, 474 m a.s.l. (Norwegian Institute for Air Research, Norway)
ferdberple says: May 29, 2014 at 4:13 pm
“Ferdinand Engelbeen says:
May 29, 2014 at 9:15 am
Not a surprise that Kaufmann (not a skeptic at all) didn’t find an editor willing to publish his work…
=================
Yet, from a quick look, the mathematics is sound in its approach. And it is certainly significant in its contribution to new knowledge.”
Nick Stokes says: May 29, 2014 at 4:20 pm
Well, it’s not new knowledge. It’s an unpublished 2004 paper, and the GCM runs it describes are from last century.
And I think FE is thinking of a different Kaufman. These guys are economists.
NOAA’s greenhouse gas scientists ran a test of their own. For 14 years, between 1996 and 2010, they measured radiative forcing at ground zero for Mann Maid Glow Bull Warming: the North American Great Plains.
Using instruments and a timeframe they selected themselves, they found less radiative forcing at the end than when they started checking.
That is three quarters of a million measurements (actually more than 800,000) over fourteen years, starting two years before 1998 and running all through this recent hottest decade.
“A trend analysis was applied to a 14-yr time series of downwelling spectral infrared radiance observations from the Atmospheric Emitted Radiance Interferometer (AERI) located at the Atmospheric Radiation Measurement Program (ARM) site in the U.S. Southern Great Plains. The highly accurate calibration of the AERI instrument, performed every 10 min, ensures that any statistically significant trend in the observed data over this time can be attributed to changes in the atmospheric properties and composition, and not to changes in the sensitivity or responsivity of the instrument. The measured infrared spectra, numbering more than 800 000, were classified as clear-sky, thin cloud, and thick cloud scenes using a neural network method.
The AERI data record demonstrates that the downwelling infrared radiance is decreasing over this 14-yr period in the winter, summer, and autumn seasons but it is increasing in the spring; these trends are statistically significant and are primarily due to long-term change in the cloudiness above the site.”
Take a look at the organizations and campuses that secured funding for and carried out this study. It was done by NOAA scientists: not skeptics of the story, but the authors of it. That is important to realize.
http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4210.1?journalCode=clim
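The abstract’s key claim is that the trends are “statistically significant”. Purely as a rough illustration of what such a test involves (this is not the AERI authors’ actual method, and the numbers below are synthetic), a least-squares trend and its significance can be checked like this:

```python
# Rough illustration of testing whether a time-series trend is statistically
# significant. This is NOT the method of the AERI paper; it is a generic
# ordinary-least-squares example on synthetic monthly data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(14 * 12) / 12.0                 # 14 years of monthly time steps
y = -0.2 * t + rng.normal(0, 1.5, t.size)     # synthetic series with a small decline

n = t.size
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se_slope = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
t_stat = slope / se_slope

# Real monthly series are autocorrelated; a proper test would inflate this uncertainty.
print(f"slope = {slope:+.3f} per year, standard error = {se_slope:.3f}")
print(f"t-statistic = {t_stat:.1f}  (|t| > ~2 suggests significance at ~95%)")
```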
—————————-
“CO2 remains in the atmosphere for hundreds of years. Its lifespan in the oceans is even longer. It is the single most important greenhouse gas emitted by human activities. It was responsible for 85% of the increase in radiative forcing – the warming effect on our climate – over the decade 2002-2012.
Between 1990 and 2013 there was a 34% increase in radiative forcing because of greenhouse gases, according to the latest figures from the U.S. National Oceanic and Atmospheric Administration (NOAA).”
Nick Stokes says:
May 29, 2014 at 4:20 pm
These guys are economists.
=============
Yes, most of the mathematics of analyzing forecasts has come from economics. Climate Science has yet to develop anything comparable.
That is unfortunate, because it would allow Climate Science to eliminate those models that are faulty. The problem for the IPCC is that they have no way to judge which models are better than others, so they need to include them all. By failing to learn and apply the techniques of other disciplines they are doomed to repeat their mistakes.
This isn’t only an IPCC problem; it is a Climate Science problem. Yet economics already knows how to evaluate models: if a model has no more skill than its inputs, it is a worthless model and should be rejected, because it delivers no value. Except, of course, to bamboozle those dishing out the grant monies.
ps: There were no computerized climate models a century ago.
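A minimal sketch of the kind of skill test being described here, assuming nothing more than arrays of observations and forecasts; the persistence (no-change) baseline is one common choice from the forecasting literature, not something specified in the comment:

```python
# Sketch of a forecast skill score: compare a model's error with the error of a
# trivial "persistence" baseline (next value equals the last observed value).
# Skill <= 0 means the model adds no value. The arrays are made up for illustration.
import numpy as np

def skill_vs_persistence(obs, forecast):
    """1 - MSE(model)/MSE(persistence); positive means the model beats the baseline."""
    mse_model = np.mean((forecast[1:] - obs[1:]) ** 2)     # verify from the 2nd point on
    mse_persist = np.mean((obs[:-1] - obs[1:]) ** 2)        # baseline: last observed value
    return 1.0 - mse_model / mse_persist

obs = np.array([0.10, 0.12, 0.08, 0.15, 0.11, 0.14])
fcst = np.array([0.09, 0.11, 0.10, 0.13, 0.12, 0.13])
print(f"skill vs persistence: {skill_vs_persistence(obs, fcst):+.2f}")
```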
Nick Stokes says:
May 29, 2014 at 4:20 pm
Well, it’s not new knowledge.
==============
Has there been a more recent study of climate model forecasts? Have the models been categorized by the nature of their errors? Have the model skill levels been analyzed to confirm that they exceed those of the inputs? Please provide the links, thanks.
Nick Stokes says:
May 29, 2014 at 4:20 pm
Nick, it is the same Kaufmann who complained at RealClimate some years ago that his work was not accepted by the climate community:
http://www.realclimate.org/index.php/archives/2005/12/naturally-trendy/comment-page-2/#comment-6801
While an economist, he did see the same problems as occurred with the large multivariable programs once used for economics, which all failed, while the simple programs with fewer variables came closer to what really happened…
Despite that, even in 2011 he tried to explain the “pause” with a combination of more human SO2/aerosol emissions (which is not true globally) and natural variability:
http://www.pnas.org/content/108/29/11790.abstract
About the skills of the models:
Here is a comparison between the temperature trend (RSS MSU lower troposphere) since 1996.8, the longest “pause” of all the trends, and the temperature increase caused by CO2 over the same time frame, assuming no feedbacks (0.9 C for 2xCO2 according to Modtran). And here is the same for the average of the IPCC range of 1.5-4.5 C, thus 3 C for 2xCO2.
It should be clear that all models assuming a high sensitivity beyond 3 C for 2xCO2 are out of touch with reality.
There is a small error in the plots above: the trend line for the T-CO2 relationship should be logarithmic, but here it is linear, as no other form is possible in Wood for Trees. The deviation is small, however.
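For reference, the logarithmic relation Ferdinand mentions is easy to evaluate directly. The sketch below uses ΔT = S·ln(C/C0)/ln(2) with the two sensitivities he quotes; the CO2 values for roughly 1996 and 2014 are approximations added here for illustration only:

```python
# Sketch of the logarithmic CO2-temperature relation referred to above:
# deltaT = S * ln(C/C0) / ln(2), where S is the assumed sensitivity per CO2 doubling.
# The CO2 values for ~1996 and ~2014 below are approximate, for illustration only.
import math

def co2_warming(c0_ppm, c1_ppm, sensitivity_per_doubling):
    return sensitivity_per_doubling * math.log(c1_ppm / c0_ppm) / math.log(2.0)

c_1996, c_2014 = 362.0, 398.0   # approximate annual-mean CO2, ppm
for s in (0.9, 3.0):            # no-feedback vs. IPCC mid-range sensitivity
    print(f"S = {s} C/doubling -> about {co2_warming(c_1996, c_2014, s):.2f} C since 1996")
```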
Ferdinand Engelbeen says: May 30, 2014 at 8:02 am
Yes, my mistake. I had forgotten that he was also the author of the 2011 paper.
That is actually relevant to this thread, as he says that, indeed, CO2 forcing increased during the period, but was balanced by a rise in negative sulfate forcing.
“It may be clear that all models assuming a high sensitivity…”
My old complaint – models do not assume a sensitivity. But the logic of Kaufmann’s paper says that it isn’t a sensitivity matter. Total forcing did not increase, so no temperature rise is expected from forcing, whatever the sensitivity.
Nick Stokes says:
May 30, 2014 at 1:50 pm
Except that the increase in SO2 doesn’t exist: the increase in SE Asia is mostly balanced by the decrease in Western Europe and North America… And the brown aerosols over India are warming, not cooling… Moreover, the alleged influence of SO2 is far overblown in a lot of models, which leads to the huge differences in sensitivity; these are mainly the result of the CO2-SO2 tandem. But if the SO2 influence is overblown, then there is an increase in forcing without any result in temperature…
While an economist, he did see the same problems as occurred with the large multivariable programs once used for economics, which all failed, while the simple programs with fewer variables came closer to what really happened…
=================
Anyone with a background in applied mathematics on computers knows why this happens. Even the most rudimentary of linear techniques suffers from this problem.
Remember solving simultaneous linear equations in school? You tried to solve for (x, y, z) using three linear equations. The object was to get all 1’s on the diagonal of the matrix and 0’s everywhere else; the values of your unknowns would then sit to the right of the “=” sign.
But guess what happens when you try this on a computer. Even when programmed perfectly, computers are not exact; they carry a small round-off error. And even with only three unknowns, that round-off error shows up in the results.
And this is a trivial linear problem. As you add more parameters, the errors grow, typically in a non-linear fashion. The problem applies not only to round-off inside the computer but also to measurement errors in the input parameters. As you add parameters, the errors overwhelm the results.
And heaven help you if you try to go beyond a linear model. Non-linear models are so sensitive to small errors as to be largely unsolvable. Sure, you can solve them, but the results are unreliable.
Thus it has been found that simple models typically outperform complex models, because they are better able to control the size of the error.
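A minimal demonstration of the point about round-off and error growth, using a deliberately ill-conditioned 3×3 system; the matrix is contrived for illustration:

```python
# Demonstration of how floating-point round-off interacts with an ill-conditioned
# linear system: a tiny perturbation of the right-hand side produces a large
# change in the solution. The matrix below is contrived for illustration.
import numpy as np

A = np.array([[1.0, 1.0,        1.0],
              [1.0, 1.0 + 1e-9, 1.0],
              [1.0, 1.0,        1.0 + 1e-9]])
b = np.array([3.0, 3.0, 3.0])

x = np.linalg.solve(A, b)                    # exact answer is (3, 0, 0), computed with round-off
x_nudged = np.linalg.solve(A, b + np.array([0.0, 1e-7, 0.0]))   # tiny change to one input

print("condition number: ", np.linalg.cond(A))
print("solution:         ", x)
print("after tiny nudge: ", x_nudged)        # jumps to roughly (-97, 100, 0)
```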
Unfortunately, climate science doesn’t like this answer, so they try to sweep it under the table. They argue that Economists are not Climate Scientists. But the reality is that Economists have been doing computer forecasting for a lot longer than Climate Science, and they have a large body of experience in how to evaluate and eliminate faulty models. Climate Science has no such body of theory or experience.
My old complaint – models do not assume a sensitivity.
=================
The model assumptions are in the parameters. So for example, if one model uses a high value for SO2, and then trains the model using historical data, this will result in a high CO2 sensitivity inferred in the model.
Similarly, if one model uses a low value for SO2, and then trains the model using historical data, this will result in a low CO2 sensitivity in the model.
This training can either be computer-driven, with the computer adjusting the weights of each parameter to optimize the fit, or done by humans adjusting the weights manually. In both cases it is still training.
Thus the notion that models do not assume a sensitivity is, strictly speaking, correct; the sensitivity is instead determined by the parameters that the humans set for the model.
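A toy illustration of that parameter trade-off: the stronger the assumed aerosol cooling, the higher the sensitivity needed to reproduce the same observed warming. All numbers below are invented for illustration and do not come from any particular model:

```python
# Toy illustration of the CO2-SO2 "tandem": the stronger the assumed (negative)
# aerosol forcing, the higher the climate sensitivity needed to reproduce the
# same observed warming. All numbers are invented for illustration.
F_2XCO2 = 3.7          # W/m^2 forcing per CO2 doubling (standard value)
F_GHG = 2.5            # assumed greenhouse-gas forcing since pre-industrial, W/m^2
DT_OBS = 0.85          # assumed observed warming to be matched, deg C

for f_aerosol in (-0.3, -0.9, -1.5):            # weak, medium, strong aerosol cooling
    net_forcing = F_GHG + f_aerosol
    lam = DT_OBS / net_forcing                   # deg C per W/m^2 needed to fit DT_OBS
    sensitivity = lam * F_2XCO2                  # implied deg C per CO2 doubling
    print(f"aerosol forcing {f_aerosol:+.1f} W/m^2 -> implied sensitivity "
          f"{sensitivity:.1f} C per doubling")
```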
The fallacy of climate models is shown by the models themselves. Take a single climate model: small changes in the inputs, much smaller than the measurement error, result in large changes in the outputs. Two virtually identical runs of the same model produce very different forecasts.
What the models are telling us is that either outcome is possible: temperatures could go up, or they might not, no matter what we do about CO2. Again, Climate Science doesn’t like this answer, so they average the two runs of this single model together and call that the forecast.
But this is mathematically incorrect and misleading to the point of scientific fraud. The model is showing that there is HIGH VARIABILITY. And this variability is not due to the forcings, because it happens with very little change in the forcings. Thus, the variability is inherent in the climate system, but it is hidden by the process of averaging.
This HIGH VARIABILITY is hidden from the scientific community by the process of averaging, and as a result the IPCC has been misled into reporting that there is low natural variability. But the climate models themselves are telling us that variability is high.
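A toy sketch of the averaging point: many runs of the same noisy "model" with identical forcing spread widely, while their average looks smooth. The trend-plus-red-noise model below is purely illustrative, not any real climate model:

```python
# Sketch of the averaging point: individual runs of the same toy model (identical
# forced trend, different noise) spread widely, while their average is smooth.
# The toy "model" here is just a linear trend plus red noise, purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_years = 30, 50
trend = 0.02 * np.arange(n_years)                # identical forced trend, deg C/yr

runs = np.empty((n_runs, n_years))
for i in range(n_runs):
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = 0.7 * noise[t - 1] + rng.normal(0, 0.15)   # red (AR1) noise
    runs[i] = trend + noise

spread_runs = runs[:, -1].std()                  # spread of individual runs at year 50
mean_series = runs.mean(axis=0)
print(f"final-year spread across runs:   {spread_runs:.2f} C")
print(f"final-year value of the average: {mean_series[-1]:.2f} C "
      f"(forced trend alone gives {trend[-1]:.2f} C)")
```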