Modeling in the red

From an Ohio State University press release, where they see a lot of red and little else, comes yet another warm-certainty model:

STATISTICAL ANALYSIS PROJECTS FUTURE TEMPERATURES IN NORTH AMERICA

Upper-left panel: The posterior mean of the average temperature-change projections. Upper-right panel: The posterior standard deviation of the average temperature-change projections. Lower panels: Pixelwise posterior 2.5th (lower-left) and 97.5th (lower-right) percentiles of the average temperature-change projections. Units for all panels are in °C. [Source: Kang and Cressie (2012)]

COLUMBUS, Ohio – For the first time, researchers have been able to combine different climate models using spatial statistics – to project future seasonal temperature changes in regions across North America.

They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.

The analysis, developed by statisticians at Ohio State University, examines groups of regional climate models, finds the commonalities between them, and determines how much weight each individual climate projection should get in a consensus climate estimate.

Through maps on the statisticians’ website, people can see how their own region’s temperature will likely change by 2070 – overall, and for individual seasons of the year.

Given the complexity and variety of climate models produced by different research groups around the world, there is a need for a tool that can analyze groups of them together, explained Noel Cressie, professor of statistics and director of Ohio State’s Program in Spatial Statistics and Environmental Statistics.

Cressie and former graduate student Emily Kang, now at the University of Cincinnati, present the statistical analysis in a paper published in the International Journal of Applied Earth Observation and Geoinformation.

“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said. “We wanted to develop a way to determine the likelihood of different outcomes, and combine them into a consensus climate projection. We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”

For their initial analysis, Cressie and Kang chose to combine two regional climate models developed for the North American Regional Climate Change Assessment Program. Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070. The data were broken down into blocks 50 kilometers (about 30 miles) on a side, throughout North America.

Averaging the results over those individual blocks, Cressie and Kang’s statistical analysis estimated that average land temperatures across North America will rise around 2.5 degrees Celsius (4.5 degrees Fahrenheit) by 2070. That result is in agreement with the findings of the United Nations Intergovernmental Panel on Climate Change, which suggest that under the same emissions scenario as used by NARCCAP, global average temperatures will rise 2.4 degrees Celsius (4.3 degrees Fahrenheit) by 2070. Cressie and Kang’s analysis is for North America – and not only estimates average land temperature rise, but regional temperature rise for all four seasons of the year.

Cressie cautioned that this first study is based on a combination of a small number of models. Nevertheless, he continued, the statistical computations are scalable to a larger number of models. The study shows that climate models can indeed be combined to achieve consensus, and the certainty of that consensus can be quantified.

The statistical analysis could be used to combine climate models from any region in the world, though, he added, it would require an expert spatial statistician to modify the analysis for other settings.

The key is a special combination of statistical analysis methods that Cressie pioneered, which use spatial statistical models in what researchers call Bayesian hierarchical statistical analyses.

“We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”

The latter techniques come from Bayesian statistics, which allows researchers to quantify the certainty associated with any particular model outcome. All data sources and models are more or less certain, Cressie explained, and it is the quantification of these certainties that forms the building blocks of a Bayesian analysis.

In the case of the two North American regional climate models, his Bayesian analysis technique was able to give a range of possible temperature changes that includes the true temperature change with 95 percent probability.
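The idea described above can be sketched in miniature. The following is only an illustrative toy (with made-up numbers), not Kang and Cressie's actual hierarchical model: combining two normal projections by weighting each with its precision yields a posterior mean, standard deviation, and an approximate 95 percent interval of the kind shown in the maps.

```python
import math

def combine(means, sds):
    """Precision-weighted consensus of independent normal projections."""
    precisions = [1.0 / s ** 2 for s in sds]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total
    return mean, math.sqrt(1.0 / total)

# Two hypothetical regional projections of 2041-2070 warming, in deg C
mean, sd = combine([2.2, 2.9], [0.6, 0.8])
low, high = mean - 1.96 * sd, mean + 1.96 * sd  # approximate 95% interval
print(round(mean, 2), round(sd, 2), round(low, 2), round(high, 2))
```

Note that the consensus lands nearer the more certain (smaller-sd) model, which is the point of weighting by precision rather than simple averaging.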

After producing average maps for all of North America, the researchers took their analysis a step further and examined temperature changes for the four seasons. On their website, they show those seasonal changes for regions in the Hudson Bay, the Great Lakes, the Midwest, and the Rocky Mountains.

In the future, the region in the Hudson Bay will likely experience larger temperature swings than the others, they found.

That Canadian region in the northeast part of the continent is likely to experience the biggest change over the winter months, with temperatures estimated to rise an average of about 6 degrees Celsius (10.7 degrees Fahrenheit) – possibly because ice reflects less energy away from the Earth’s surface as it melts. Hudson Bay summers, on the other hand, are estimated to experience only an increase of about 1.2 degrees Celsius (2.1 degrees Fahrenheit).

According to the researchers’ statistical analysis, the Midwest and Great Lakes regions will experience a rise in temperature of about 2.8 degrees Celsius (5 degrees Fahrenheit), regardless of season. The Rocky Mountains region shows greater projected increases in the summer (about 3.5 degrees Celsius, or 6.3 degrees Fahrenheit) than in the winter (about 2.3 degrees Celsius, or 4.1 degrees Fahrenheit).

In the future, the researchers could consider other climate variables in their analysis, such as precipitation.

This research was supported by NASA’s Earth Science Technology Office. The North American Regional Climate Change Assessment Program is funded by the National Science Foundation, the U.S. Department of Energy, the National Oceanic and Atmospheric Administration, and the U.S. Environmental Protection Agency office of Research and Development.

###

 


95 thoughts on “Modeling in the red”

  1. “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said.

    I have never seen anyone make that statement — did Cressie just transfer from ANU, or did he think that one up all on his own?

  2. Looking at the groups that supported this “research” I am not surprised by the conclusions.

  3. How far into the future is this model? If it’s August, they might be right this time.

  4. “We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”

    Huh?

    Hudson Bay summers, on the other hand, are estimated to experience only an increase of about 1.2 degrees Celsius (2.1 degrees Fahrenheit).

    After quantifying the certainty they merely estimate that Hudson Bay will do their bidding.

  5. There is much overlap between this work and the prediction of national and regional economic changes. If these guys are smart enough to derive fine detail from coarse models, it should be a cinch to work in economics and become very wealthy very quickly.
    I remain totally unconvinced, if for no other reason than that many past temperature databases are so corrupt that one cannot have confidence in the future. As a test, would any of these authors like to wager large personal sums of $ on the predictions?
    I thought not.

  6. “That Canadian region in the northeast part of the continent is likely to experience the biggest change over the winter months, with temperatures estimated to rise an average of about 6 degrees Celsius”…

    Six degrees over 55 years? Big deal. The temperature rose from 7°C this morning to 26°C this afternoon.

    Mine’s bigger.

  7. “All data sources and models are more or less certain, Cressie explained, and it is the quantification of these certainties that are the building blocks of a Bayesian analysis.” ?!
    From

    http://mathworld.wolfram.com/BayesianAnalysis.html

    “Bayesian analysis is somewhat controversial because the validity of the result depends on how valid the prior distribution is, and this cannot be assessed statistically.”

    Clearly, an invalid a priori assumption of perfectly certain data sources and models always leads to invalidly “certain” Bayesian results. In other words, GIGO.

  8. Unless I am missing something, it appears that they assume the feedbacks due to a doubling of CO2 are positive and not negative. However if the last 10 to 15 years are any indication, and if Dr. Spencer’s views on negative feedback are true, then it seems as if they made a wrong assumption right off the bat and everything else they may say would be wrong.

  9. Let’s take some models that have been shown to have no predictive skill whatsoever, combine them, and make still more predictions. Yeah, that ought to get a Nobel prize…

    I used to make snarky sarcastic remarks about shoddy science, but the drivel of late has descended to a level of absurdity such that mocking it is pointless.

  10. Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070.

    As Bill Tuttle remarks, these people have no clue about what sceptics have issues with.

    But as the quote shows here, they cherry-pick a time when temperatures went up and CO2 went up to make their case. Why don’t they look at the ENTIRE time period of, say, 1950 to today instead of cherry-picking the time when CO2 levels and temperatures both went up?

    The answer, of course, which most of us sceptics realize, is that the only time period from 1950 to the present that fits the meme of CO2-driven warming is this shortened window, whereas anything outside of it shows either cooling or stagnant temperatures. Instead of showing the truth and the entire picture, they remain fixated on their goal of showing a pre-determined outcome. That is what sceptics have an issue with first and foremost.

    Of course, other issues in the models (GCMs, if you will) come to mind, including assumed values for positive feedbacks on CO2 (or other greenhouse gases). They fix these in the GCMs by assuming that, magically, other anthropogenic influences were present when temperatures did not cooperate with their pre-determined conclusions, and it all comes back to the problem that no one understands, to this day, exactly why clouds change. And so the models are all based on cherry-picking this very same 1970-2000 period as a “gold standard”, because it MUST be the time period in which warming was caused by CO2.

    It’s such a large logical fallacy that I don’t even know how to tell people how absurd it is. No sceptic has a problem with computer models off the bat, or with models disagreeing with one another; the problems stem from one common source: this predetermined outcome always surfaces, and no matter what study you find, they will compare changes seen in this time period with a future that, they tell us, will warm up the same way.

    Why is this 30-year time period so special? Why is 30 years the climate gold standard when we see a sine wave of temperatures on a 60-year cycle?

    These facts just boggle the mind, because the truth is easy to determine. Sure, most sceptics agree that we warmed over the 20th century. But the effect of CO2 on this warming is the only point of contention. And yet every study they put out never goes to the heart of this issue.

    So yes, yet another worthless study. And another scientist who completely misunderstands the sceptical argument from the get-go and refuses to leave his or her echo chamber and see the forest for the trees.

  11. These regional climate model runs are based on some scenarios for CO2 levels in the future. My question is somewhat different: How many (or which) regional weather forecasting models base their outputs on current CO2 levels?

  12. So they can predict with statistical certainty how computer climate models will predict the climate. When this technique is perfected, it might be more usefully applied to video games. Imagine knowing how Final Fantasy XII or Super Mario Bros will turn out before you begin.

    “All data sources and models are more or less certain, Cressie explained…”
    Are we more certain of HadCRUT3 than HadCRUT2, or 4, or USHCN more than USHCN v2?
    Climate models are more sensitive to initial conditions than to CO2, and more sensitive to CO2 than reality. Perfect application of fuzzy logic to fuzzy thinking.

  13. “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,”

    Solution: Garbage In – Weighted Average Garbage Out.

  14. They need the people to agree with them. All the sheep for slaughter have to walk willingly into the pen. But the sheep are hesitating at the gate, so more scare tactics are needed to herd them through. That’s not working, so it’s back to rational and consensus. If the people aren’t scared enough to accept the need for global control, then, then, then, why it might just all fall apart!

  15. So we take a bunch of models that can’t replicate current global temperature trends and combine them. Well, if some were above the actual trend and some were below, we might end up with something in the middle. Sadly, all of these models are over-predicting temperature trends, so I can have pretty high confidence that this will be wrong too.

  16. I have copied and pasted this from Christopher Booker’s excellent weekly column in the Sunday Telegraph.
    “Someone whom I was delighted to meet again in Australia was Professor Ian Plimer, a prominent “climate sceptic”, who is one of Abbott’s advisers. In his latest entertaining book, How To Get Expelled From School (by asking the teachers 101 awkward scientific questions about their belief in global warming), Plimer cites a vivid illustration of how great is the threat posed to the planet by man-made CO2.
    If one imagines a length of the Earth’s atmosphere one kilometre long, 780 metres of this are made up of nitrogen, 210 are oxygen and 10 metres are water vapour (the largest greenhouse gas). Just 0.38 of a metre is carbon dioxide, to which human emissions contribute one millimetre.”

    This is why I am sceptical and smile to myself as these climate “scientists” constantly run around in metaphorical circles trying to prove that their computer models are right, when clearly they are not!

  17. These folks are brilliant… uhm… religious! CO2 the newest false god. We cannot disprove their models for decades. But we should act prudently just in case they are right! The IPCC’s models continue to be wrong, yet the AGW believers still believe. Another sad day for people disguising themselves as scientists.

  18. This is all Bayesian statistics. This started out as a technique to formalize “soft” data, e.g. the opinions of knowledgeable persons. This is the “prior”, which is then modified by the actual data (in this case, not real data, but modelling results), and the result is the “posterior”. So this is a guess updated by another guess, ending up as shiny new consensus climate predictions.
    Bayesian statistics, properly used, is a legitimate technique, but its popularity in climate science is probably due to the fact that you can get essentially any result you want by a judicious choice of prior. Nor is there any objective method for evaluating the validity of the prior, and you can go back and change the prior any number of times without having to tell anybody.
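The prior-sensitivity point above is easy to demonstrate with a toy conjugate-normal update (illustrative numbers only; nothing here is from the paper): the same "data" filtered through two different priors gives two different posteriors.

```python
import math

def posterior(prior_mean, prior_sd, data_mean, data_sd):
    """Normal prior x normal likelihood -> normal posterior."""
    w_p, w_d = 1.0 / prior_sd ** 2, 1.0 / data_sd ** 2
    mean = (prior_mean * w_p + data_mean * w_d) / (w_p + w_d)
    return mean, math.sqrt(1.0 / (w_p + w_d))

data = (2.5, 1.0)  # hypothetical ensemble-mean warming and its sd, deg C
print(posterior(0.0, 0.5, *data))  # sceptical prior drags the estimate down
print(posterior(3.0, 0.5, *data))  # warm prior drags it up
```

With a tight prior at 0 deg C the posterior mean is 0.5; with a tight prior at 3 deg C it is 2.9 — same data, very different "consensus".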

  19. Seems quite simple to me: no matter how many variables are included, how many million lines of code, no matter how airflows and ocean currents are taken into account, if the effect of increasing carbon dioxide is predicated as a ‘net energy gain to the system’, then

    Guess what: when you run that model, the output WILL tell you that the system will get warmer.

  20. Bill Tuttle says:
    May 15, 2012 at 9:11 pm
    “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said.

    Actually that is true.
    It’s just that the models give results different from what actually happens, and that is what causes the doubt.

    No-one cares that they disagree in many different wrong ways.

  21. Boring! Let’s have some variety. It’s always warming everywhere. Red red red.

    There is an old saying “Don’t put all your eggs in one basket”.

  22. Cressie is the main man in spatial stats today, specifically spatio-temporal stats.

  23. “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said.

    I would feel more confident if his project was driven by his own critical perspective and that of other sceptical scientists.

  24. Can someone tell me if this is called “Guess-Laundering”? I may have just been the first to coin a phrase for these new attempts to persuade useful idiots…

  25. benfrommo says:
    May 15, 2012 at 10:40 pm
    “But as the quote shows here, they cherry pick a time when temperatures went up and CO2 went up to show their case. Why don’t they look at either the ENTIRE time period of say 1950 – today instead of cherry picking the time when CO2 levels and temperatures both went up?”

    Exactly. The approach might have merit but by ignoring the period 2000-2010 – the period with which all the models have problems – they are reducing their own work to a caricature.

    Cherrypicking the data for the validation means that the validation is worthless. During validation of a model, you should always strive to use as much data as is available.

    Furthermore, even if the validation had been done properly, this by no means proves predictive skill out to 2070. What are they thinking? If, as Mosher says, “Cressie is the main man in spatial stats today”, I would expect him to know that. If he pretends he doesn’t… well…

  26. LOL otter, I was about to say Go Bucks too! But I think they chose all the red to show their team colors. Scarlet and Grey! :)

    All joking aside, I never take those model studies seriously because they are never true. Just wish the MSM would realize how far off those things are.

  27. Steven Mosher says:

    May 16, 2012 at 12:29 am
    Cressie is the main man in spatial stats today, specifically spatio-temporal stats.

    SO WHAT?

  28. M Courtney says:
    May 16, 2012 at 12:06 am
    A me, May 15, 2012 at 9:11 pm
    “…so they argue that they don’t know what to believe,” he said.
    Actually that is true.
    It’s just that the models give different results to what actually happens that causes the doubt.
    No-one cares that they disagree in many different wrong ways.

    Cressie’s statement means he assumes that if they can just bring all their models into agreement, the 60-watt cartoon light bulb (a CFL, naturally) over our collective heads will suddenly blink “on” and we’ll cease being so contrary — which is why he’s concentrating on getting the models to agree. Therein lies the rub:

    All data sources and models are more or less certain, Cressie explained…

    He’s saying the models are just fine — we’re saying the models are *not* fine. The models are fundamentally flawed because they have no means of replicating all the factors influencing the climate other than by inputting assumptions, and those assumptions are colored by bias, e.g., that CO2 drives temperature rather than follows it, or that an increase in temperature automatically triggers an increase in water vapor. In essence, their macro-models rely on a host of micro-models, and all of them are programmed to run on assumptions which don’t replicate reality. And here’s Cressie saying we can fix that merely by adding more models to the mix, rather than working on refining the assumptions.

    It’s not that we don’t know *which* of their climate models to believe, we don’t believe *any* of their climate models, because none of them are capable of producing either the kind of results or “high certainty” that they claim.

    The garbage is built into the models — to add multiple models and then expect filet mignon to come out is unrealistic.

  29. otter17~ You understand! Bucks, moola, $$$, cha-ching….! That’s what ‘science’ like this, is all about.

  30. “One of the criticisms from climate-change skeptics” Any chance of them pointing out who actually claims climate does not change, or is this just a standard throwaway line, designed not for its scientific use but for political purposes?

    Meanwhile it’s the standard approach: start with base assumptions that support your views, and never mind their actual validity (as this is ‘the cause’, that is a minor issue); then run models which tell you how bad things will get. Finish by asking for more research cash to do the same again.

  31. “We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”

    Certainty?

    What they have measured is the extent to which scientists agree, which has nothing to do with certainty in what they believe. Only certainty (in a statistical sense) that they do believe it.

    Scientific certainty results from predictive accuracy. Something climate models are not known for.

  32. Phillip Bratby says:
    May 15, 2012 at 11:22 pm

    There’s only one word to describe this: GIGO

    One step further – GIGO + lies, damn lies + Bayesian Stats = Consensus = Certainty

    Ahem – guess again.

  33. My mother always told me that two wrongs don’t make a right, pity no-one told these guys that two wrong models don’t make a right one.

    The posterior mean of the average temperature change projections
    Posterior my arse.

  34. I see a lot of criticism above, with which I agree. Without comparing models to reality, statements such as this are meaningless.
    They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.

    Without a track record, there is no way to estimate ‘certainty’ other than SWAG.

  35. Bloke down the pub says:
    May 16, 2012 at 2:52 am
    “The posterior mean of the average temperature change projections”
    Posterior my arse.

    Threadwinner!

  36. Steven Mosher says:
    May 16, 2012 at 12:29 am
    Cressie is the main man in spatial stats today, specifically spatio-temporal stats.

    How is he with spatio-temperature stats?

  37. Climate Science for beginners- Alarmism: models predict that by mixing different types of garbage you will make gold – even if sceptics see a larger pile of rubbish /sarc

  38. “The study shows that climate models can indeed be combined to achieve consensus, and the certainty of that consensus can be quantified.”

    *bangs head slowly and repeatedly on table*

  39. Steve

    That was my conclusion. If they had reset their models to 1980 and were able to accurately predict the US weather from 2000 to 2011, then I’d be impressed and think they might have something. Until they do that, it is GIGO and a failed exercise.

    Bill

  40. “Given the complexity and variety of climate models produced by different research groups around the world, there is a need for a tool that can analyze groups of them together,”

    Most of the models are based on the same root program—flat Earth, no nighttime, clouds either absent or designed to warm, ocean currents only sort of, and 50+ other missing major factors—and have the same fallacious, flawed assumptions and the same adjusted input.

    Now we have garbologists studying the output of garbologists running garbology programs. What we get is confirmation that it is all garbage, and they put a smelly stamp on it.

    Wonderful.

  41. Hats off to my fellow academic Buckeyes at Ohio State, and special hello to the staff at the Lantern (“How firm thy friendship!”). But a model is still only as good as its input. I guess what Noel Cressie is saying is: “This is the super-duper new and improved model! It’s not like those old, unreliable models that drove the IPCC’s decision-making process and influenced the entire AGW movement!” Sheesh. O-H!

  42. “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,”

    Classic projection. As a skeptic, I’m not looking for anything to “believe” in. But as warmist modelers, that’s what it’s all about. Statistical analysis of confirmation-biased models, and models of models don’t provide any facts of reality, only fodder for a horribly biased world view and belief system. Get back to us when you have collected and analyzed actual observational data; as far as I’m concerned models are nothing more than a curiosity. Predicting the future with climate models is slightly easier than changing the past.

    I’m so sick and tired of being told we have to change or we will wither and die. We will all die soon enough anyway (I don’t plan on living much past 100), so I would actually enjoy a little warmth before that day comes.

  43. “One of the criticisms from climate-change skeptics is that different climate models give different results,…”

    Yeah, but all of them ensure that there is warming – that is all you have discovered, and we knew that. Even they knew that. BTW, you are probably safe from the scalpel of Steve McIntyre because the work is so insignificant.

  44. I’ve seen a pattern lately: more and more papers based on models. When you think about it, this is quite clever. When the models fail, as they most certainly will, these guys just point their fingers at the modelers and say “not my fault”. They are still able to ride the gravy train while having limited personal risk.

    Of course, if you are modeler you should start feeling a little uneasy. Guess who’s going down hard.

  45. Oops, maybe I shouldn’t have used the word scalpel with all this fear among the warm-society academics. It was only a metaphor, I didn’t mean……

  46. As with many other past climate models, the reality of the very recent climate does not seem to support future model projections. Canadian national annual temperature departures from 1961-1990 averages since 1998 [the last 14 years] show wide yearly fluctuations, but the linear trend is completely flat. So are the summer and fall temperature departures: fluctuating, but with a flat linear trend. Spring departures have gone negative, or cooling, and winter departures show some rise, based mostly on the last few winters. Regionally, of the 11 climate regions in Canada, only two areas [mostly in the high Arctic] show a rise in temperature departures, namely the Arctic Tundra and the Mountains and Fiords. All other regions show declining or flat annual temperature departures. So someone will have to turn up the heat considerably to get the additional temperature rise being projected by the model. With a quiet sun projected for the next few sun cycles and ocean cycles showing some cooling, there are likely to be fewer climate-changing strong El Niños, so I anticipate no major warming for the next 2-3 decades to support the model predictions.

  47. Such math/statistical effusions are equivalent to dividing unit-1 by zero, meaning that whether 1/0 = 1 or 1/0 = 0, unit-1 is equal to zero– a contradiction meaningless on its face. Among others, Bertrand Russell made appropriately pejorative comments on that score.

  48. Temperatures in the NE USA have been falling for decades. The projection of rising temperatures makes no sense. The authors are trying to make a linear projection using cyclic data. One might as well record temperatures from midnight to noon, and then use this to project the temperature at the following midnight. Your projection will show the projected midnight temperature to be much higher than the night before.
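The midnight-to-noon analogy is easy to check numerically. Here is a small sketch with a hypothetical daily temperature curve, fitting an ordinary least-squares line to the half-cycle from midnight to noon and extrapolating to the next midnight:

```python
import math

hours = range(0, 13)  # midnight (0) through noon (12)
# Hypothetical daily cycle: mean 15 deg, amplitude 8 deg, coldest at midnight
temps = [15 - 8 * math.cos(2 * math.pi * h / 24) for h in hours]

# Ordinary least-squares slope and intercept, computed by hand
n = len(temps)
xbar = sum(hours) / n
ybar = sum(temps) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(hours, temps)) / \
        sum((x - xbar) ** 2 for x in hours)
intercept = ybar - slope * xbar

projected_midnight = intercept + slope * 24  # linear projection to hour 24
actual_midnight = temps[0]                   # the cycle returns to its start
print(round(projected_midnight, 1), round(actual_midnight, 1))
```

The linear fit projects roughly 43 degrees at the next midnight, against an actual 7: exactly the commenter's point about projecting a trend from half of a cycle.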

  49. I would like to second Mario Lento’s proposition for the adoption of the phrase “guess laundering” – average a bunch of models and present the outcome as fact.

  50. ferd berple says:
    May 16, 2012 at 7:56 am

    Temperatures in NE USA have been falling for decades. The projection of rising temperatures makes no sense….
    _____________________________
    Well, the temperatures are not exactly increasing in the SE either. It is 73 F today in mid North Carolina. 2004 is about two years after the cycle 23 max, when the influence of the sun should be showing strongly. By July tenth I had counted 43 days over ninety F in 2004 vs 26 days in 2010, and nine days of 98 F in 2004 vs four days of 98 F in 2010.

    So far this May we have had ONE day over ninety F (91 F) vs seventeen days over ninety F in 2004, three of those days 95 F or more. At this point the five-day forecast is for high temperatures in the seventies. Heck, we have only had five days so far above eighty, and it doesn’t look like it will be much warmer than the eighties for the rest of the month.

    Here is the closest GISS station and you can see the down trend:

    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425746930020&data_set=1&num_neighbors=1

  51. Toto says:
    May 15, 2012 at 11:16 pm

    How many (or which) regional weather forecasting models base their outputs on current CO2 levels?

    Correct me if I am wrong, but did the Met Office not do this for several years, when they predicted barbecue summers and the like? Then, after several years of embarrassments, they stopped doing this.

  52. “One of the criticisms from climate-change skeptics is that different climate models can’t predict the past correctly, so they argue that they don’t know what to believe of the future,” he said.

  53. Seattle numerical weather forecaster Cliff Mass complains that the climate guys get all the big computers:

    http://cliffmass.blogspot.ca/2012/05/us-climate-versus-weather-computers.html

    “There is a vast overkill in pushing computer resources for climate prediction, while weather prediction is a very poor cousin.”
    “Furthermore, there is no better way to improve climate models than to improve weather models, since essentially they are the same. You learn about model weaknesses from daily forecast errors.”

  54. SSDD and/or GIGO

    Take your pick.

    Makes as much sense as polling 2 republican conventions and concluding that Mitt Romney will win in a landslide.

    • bernie1815 says:
      May 16, 2012 at 8:44 am
      Calling Matt Briggs!

      ##########

      I’ll bet that Matt is a fan of Cressie. Cressie wrote the book (three, actually) and well over 200 refereed articles. Everybody who knows R and who works in spatio-temporal stats knows him.

      Folks would do well to understand what CRessie did before they had knee jerk reactions.

      Up until now, the methods for combining climate model predictions have been pretty crude: simple averaging. However, the information in the various hindcasts (how each model is wrong in some ways and right in others) can be used to produce a better ensemble forecast. A tighter forecast is good for both confirmation and disconfirmation.
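    The hindcast-informed weighting described above can be sketched in a few lines. This is a toy illustration, not Kang and Cressie’s actual Bayesian hierarchical method, and every number below is invented: models with smaller hindcast errors simply get larger weights than a plain average would give them.

```python
# Toy sketch: inverse-squared-error weighting of model projections.
# NOT Kang & Cressie's method; all numbers are invented for illustration.

hindcast_rmse = {"model_a": 0.8, "model_b": 1.6, "model_c": 1.2}  # degC vs. observations
projections   = {"model_a": 2.9, "model_b": 4.1, "model_c": 3.4}  # degC change, 2041-2070

# Weight each model by 1/RMSE^2, so better hindcasters count for more.
weights = {m: 1.0 / r**2 for m, r in hindcast_rmse.items()}
total = sum(weights.values())
weighted = sum(weights[m] * projections[m] for m in projections) / total
plain = sum(projections.values()) / len(projections)

print(f"plain average:    {plain:.2f} degC")     # ~3.47
print(f"weighted average: {weighted:.2f} degC")  # ~3.21
```

    The weighted estimate leans toward the model that hindcast best; a real analysis would also have to propagate the uncertainty of the weights themselves.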

  55. The assumptions driving this model-based statistical analysis are, as usual, profoundly incorrect. They assume that increased CO2 emissions, particularly the small human-caused fraction, will warm planetary temperatures to a discernible or measurable degree. There simply is no hard evidence to support such a theory (which is exactly what it is).

    As Dr. William M. Gray, CSU’s professor emeritus of atmospheric physics, observes:

    “It is impossible for us skeptics to believe that the doubling of CO2 which causes a global average infrared (IR) radiation blockage to space ~3.7 Wm-2 for doubling of CO2 can be very much of a climate altering feature. Especially when we contrast this 3.7 Wm-2 IR blockage (from a doubling of CO2) with the much larger and continuous 342 Wm-2 average short-wave radiation impinging on the earth and the near balancing concomitant 342 Wm-2 net long-wave and solar (albedo) energy going back to space.

    “The global climate will be little affected by this small amount of 3.7 Wm-2 IR energy blockage to space due to a doubling of CO2. It is this lack of scientific believability and the large economic and social disruptions which would result if the industrial world were to switch to renewable energy that motivates us skeptics to rebel against such obvious exaggerated claims for CO2 increase.”

  56. It is impossible for us skeptics to believe that the doubling of CO2 which causes a global average infrared (IR) radiation blockage to space ~3.7 Wm-2 for doubling of CO2 can be very much of a climate altering feature.
    #######################

    1. The sun’s input to the climate system is around 1361 W/m².
    2. Small changes in that (~1 W/m²) at solar minimum, looking at the LIA, would seem to have an effect, right?

    Changing the forcing (Watts) will have an effect on the temperature. If the sun’s output went to zero, it would get cold. If the sun’s output increased, that too would have an effect.

    A doubling of CO2 adds additional Watts. That is beyond question: tested, observed. The theory is correct; we use that theory to build things that work.
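    The ~3.7 Wm-2 figure quoted on both sides comes from the standard simplified forcing expression dF = 5.35 ln(C/C0) Wm-2 of Myhre et al. (1998), and is easy to check:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from the simplified expression
    dF = 5.35 * ln(C / C0) of Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(560.0)   # a doubling of pre-industrial CO2
print(f"{doubling:.2f} W/m^2")  # ~3.71
```

    Note that the number itself is not what is in dispute in this thread; the argument is over whether ~3.7 Wm-2 is small or large relative to the ~342 Wm-2 flows Dr. Gray cites.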

  57. Absolute unadulterated garbage. But then predicting things is very difficult, particularly things in the future.

  58. The models should hindcast the past as a test of their forecast capabilities.
    How many of the forecasting models have correctly hindcast the past?
    Jo Nova has a post on this:

    http://joannenova.com.au/2012/05/we-cant-predict-the-climate-on-a-local-regional-or-continental-scale/

    Why is the ensemble not analysed against a 1940-1970 hindcast before forecasting 2041-2070?

    What is funny about these forecasts is that they target 2041-2070, so no direct reality check of the forecasting abilities will be possible for the next couple of decades.
    As others have already said, I see in this red forecast only GIGO and BS, and no science.

  59. Someone earlier ended their comment, “O-H”.
    I-O
    It sounds like Cressie has developed a better way to compare and combine the output of computer models in general. The problem is that in applying it to the climate models, he’s trying to make a silk purse out of a sow’s ear.

  60. Steven Mosher says:
    May 16, 2012 at 12:29 am
    Cressie is the main man in spatial stats today, specifically spatio-temporal stats.

    If the best we’ve got is producing this level of dreck it suggests to me that we have mostly proven that the best statistical techniques we have available are entirely inadequate to the task at hand.

  61. Using NOAA data,

    They are predicting 3.6F by 2070.

    The trend in the USA from 1990 to 2011 is 0.22F/decade, which would take about 164 years to reach 3.6F.

    HOWEVER

    The trend from 2000 to 2011 is -0.58F/decade, which over the 55 years to 2070 works out to about -3.2F COLDER.

    I suspect the real number will be in the middle.
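    The extrapolation arithmetic above is easy to check (assuming a 2015 start, since the comment gives no baseline year, that is 55 years to 2070):

```python
# Linear extrapolation of the two NOAA trends quoted above.
# Assumes a 2015 baseline (the comment gives no start year), i.e. 55 years to 2070.

def extrapolate(trend_per_decade, years):
    """Linearly extrapolate a degF/decade trend over a span of years."""
    return trend_per_decade * years / 10.0

years_to_2070 = 55
warm = extrapolate(0.22, years_to_2070)   # 1990-2011 trend
cool = extrapolate(-0.58, years_to_2070)  # 2000-2011 trend
years_needed = 3.6 / 0.22 * 10            # years to reach +3.6F at 0.22F/decade

print(f"+0.22 F/decade over 55 yr: {warm:+.2f} F")             # +1.21
print(f"-0.58 F/decade over 55 yr: {cool:+.2f} F")             # -3.19
print(f"years to +3.6 F at +0.22/decade: {years_needed:.0f}")  # 164
```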

  62. I would like to third Mario Lento’s proposition for the adoption of the phrase “guess laundering” – and further propose that the outfit doing it be known as the “guess laundry”.

  63. Steven Mosher says: May 16, 2012 at 12:29 am Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
    What a pity he does not direct his special skills into topics that matter more, using data that mean more. There is plenty of hard science needing such skills.

  64. Dave Wendt says:
    May 16, 2012 at 1:31 pm
    If the best we’ve got is producing this level of dreck it suggests to me that we have mostly proven that the best statistical techniques we have available are entirely inadequate to the task at hand.

    Maybe I’m looking at this too hard, but perhaps the difficulties lie in the fact that most people calling themselves “climate scientists” are statisticians rather than, oh, say, scientists whose field of study actually includes climate in some fashion…

  65. The model predicts Canadian northeast winter temperatures will rise by 6C by 2070. During the 64-year period from 1948 to 2011, Canadian national winter temperatures, according to Environment Canada, rose only 1.5C, including the Arctic Mountains and Fiords. In the Arctic Tundra region they rose 2.1C; in the Northeastern Forest region, 1C; along the Atlantic Coast, 0.7C. So nowhere in the northeastern region of Canada have winter temperature changes over a much longer period even remotely approached the model predictions. I am constantly amazed at how the modellers are able to predict with great accuracy the temperatures for some remote future period, when they themselves may not be around to account for their past predictions, yet are complete failures, with no proven credibility, when it comes to predicting the next year or next decade. This latest modelling attempt does not seem credible when compared with current or past climate trends in Canada.

  66. I just noticed that in my previous post I stated the annual temperature rises during the last 64 years for the various regions of Eastern Canada, not the winter rises as I intended. Here are the correct winter temperature departures, or rises, for the northeastern regions of Canada during the last 65 years:

    ATLANTIC COAST 0.5C
    NORTHEASTERN FORESTS 1.9C
    ARCTIC MTNS & FIORDS 2.3C
    ARCTIC TUNDRA 3.2C
    [data per Environment Canada]

    The model in the above paper seems to project a rise of 6C over the roughly 30-year period between 2041 and 2070.

  67. “Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070.”

    What about the prediction for 2012-2041? What, is the model no good for the near term? Wouldn’t that mean that by 2030 they could no longer predict what was going to happen in 2041? So in 2030 all bets are off; anything could happen in 2041. How stupid is this study? If you can’t predict what the temperature will be next year or ten years from now, you CAN’T predict what the temperature will be 40 years from now either, because the temperature 40 years from now depends on the earlier temperatures.

    I think they should use the period from 950-1050 AD as their calibration period, then predict the temperature in 2013 and see how accurate they are. The model would most likely be wrong by about 6C, if not more.

    This is just another one of those unfalsifiable DOOM AND GLOOM scenarios to try and scare the public and politicians into action.

  68. ALCHESON

    You make some valid observations. My take is that the authors use the temperature data from the warming phase of the last 60-year climate cycle [1970-2000] to predict the warming phase [2041-2070] of the next cycle, but they ignore the possible cooling phase in between [2010-2040]. If the cooling phase is severe, like 1880-1910, there could be considerable temperature drops in between, and the temperature at the end of that cooling phase is anyone’s guess. The conditions during 2041-2070 may not be similar to 1970-2000.

  69. Well, I stopped reading when I got to that statistical consensus prediction; excuse me, projection, from several climate models. Statistically sophisticated consensus or no, a consensus of idiots is still an idiot consensus.

    Since the climate models do not agree with each other, you can be sure that none of them is reliable, and a mathematical hodgepodge is no more believable than any one of them. I don’t think you get any more credible information if you statisticate the average telephone number in the phone books of New York, DC, and Philadelphia; even throwing in LA and San Fran doesn’t make the “consensus” any more informative.

    Why not fix the models so they track the observed data, and forget about statistical consensus?

    And for that matter, why not fix the data, so that it really is a valid sampling of the continuous climate function of at least space and time, instead of a handful of effectively random samples?

    If you core-drill a tree to get a tree-ring stack, you still get a one-dimensional sample of a three-dimensional function, which tells you nothing significant about even that one tree’s history, let alone the whole earth’s. Oops; if you counted the rings correctly, you do get its age reasonably well. I guess that’s why they call it dendrochronology, and not climatology.

  70. “One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe.”
    This is an incorrect assumption about this scientist (me), and probably about a majority of other skeptics, for two reasons. First, I do not work on a belief system. I work on data, mostly improving its woeful quality in climate work and cautioning sensible people not to waste time by failing to validate it first.
    Second, if you form an ensemble, you have to put uncertainty bounds around it. That usually means knowing the uncertainty of the various input models. But this uncertainty is seldom, if ever, calculated correctly: first, because some variables are constrained before the model is run; and second, because the uncertainty of a model’s design should be derived from all of the runs that have been put through it, unless there are large, agreed, valid reasons for rejecting a run (like a typo). If you calculate the uncertainty of a modelling team’s efforts in this inclusive way and then form the ensemble, the overall error bounds would be so large that any curve that looked about right would fit between them, meaning that nothing of value has been demonstrated.
    That’s part of the reason for some scepticism. You can’t cherry-pick model runs any more than you can cherry-pick trees for dendrothermometry, a topic of recent debate. I have noticed the absence of statements that modellers did not share results with other teams before submitting their favourite model run to the ensemble calculation. I have seen data suggesting that some did. So “a priori” has been degraded in meaning.
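    The inclusive-uncertainty point above can be illustrated with a toy calculation (every number below is invented): if each team submits only a favourite run, the ensemble spread looks tight; counting every run widens it considerably.

```python
import statistics

# Toy illustration of the inclusive-uncertainty argument.
# All numbers invented: each hypothetical team has several runs but
# submits only its favourite (here, the first) to the ensemble.
runs = {
    "team_a": [3.0, 1.2, 4.6, 2.1],  # degC projections from repeated runs
    "team_b": [3.1, 5.0, 0.8],
    "team_c": [2.9, 1.5, 4.4, 3.9],
}

favourites = {t: r[0] for t, r in runs.items()}  # one submitted run per team
spread_favourites = statistics.stdev(favourites.values())

all_runs = [x for r in runs.values() for x in r]  # inclusive: every run counts
spread_inclusive = statistics.stdev(all_runs)

print(f"spread, one run per team: {spread_favourites:.2f} degC")  # 0.10
print(f"spread, all runs:         {spread_inclusive:.2f} degC")   # ~1.43
```

    With these invented numbers the inclusive spread is over ten times the spread of the submitted favourites, which is the commenter’s point about error bounds wide enough to fit any plausible curve.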

  71. Steven Mosher: “However, if you look at the information from various hindcasts ( and how they are wrong in some ways and right in others ) that information can be used in getting a better ensemble forecast.”

    Sophisticated mathematics cannot get past correlated error among all the models. Even the things they got “right” in the past, such as temperature, must have been “right” for the wrong reasons. Correlated errors in precipitation (Wentz) and surface-albedo feedback (Roesch) are larger than the energy imbalance of interest. It will be interesting to see what their review of the diagnostic literature for the models says about the documented correlated errors.
