Guest Post by Bob Tisdale
This post is similar in format to In Honor of Secretary of State John Kerry’s Global Warming Publicity-Founded Visit to Greenland… As you’ll see, as with Greenland, the consensus of the climate models used by the IPCC fails to simulate the surface temperatures of the contiguous United States over any timeframe from 1861 to present.
INTRODUCTION
We illustrated and discussed the wide ranges of modeled and observed absolute global surface temperatures in the November 2014 post On the Elusive Absolute Global Mean Surface Temperature – A Model-Data Comparison. Not long after came a post at RealClimate about modeled absolute global surface temperatures, authored by Gavin Schmidt, the head of the Goddard Institute for Space Studies (GISS). Gavin’s post is Absolute temperatures and relative anomalies. (Please read it in its entirety. I believe you’ll find it interesting.) Of course, Gavin Schmidt was downplaying the need for climate models to simulate Earth’s absolute surface temperatures.
In this post about the surface temperatures of the contiguous United States, we’ll present a few examples of why climate modelers need to shift their focus from surface temperature anomalies to absolute surface temperatures. Why? In addition to heat waves and cold spells, near-surface air temperatures play roles in model simulations of snow cover, drought, growing seasons, surface evaporation that contributes to rainfall, etc.
In the past, we’ve compared models and data using time-series graphs of temperature anomalies, absolute temperatures and temperature trends, and we’ll continue to provide them in this post. In this series, we’ve added a new model-data comparison graph: annual cycles based on the most recent multidecadal period. Don’t worry, that last part will become clearer later in the post.
MODELS AND DATA
We’re using the model-mean of the climate models stored in the CMIP5 (Coupled Model Intercomparison Project Phase 5) archive, with historic forcings through 2005 and RCP8.5 forcings thereafter. (The individual climate model outputs and model mean are available through the KNMI Climate Explorer.) The CMIP5-archived models were used by the IPCC for their 5th Assessment Report. The RCP8.5 forcings are the worst-case future scenario.
We’re using the model-mean (the average of the climate model outputs) because the model-mean represents the consensus of the modeling groups for how surface temperatures should have warmed if that warming were dictated by the forcings that drive the models. See the post On the Use of the Multi-Model Mean for a further discussion of its use in model-data comparisons.
I’ve used the ocean-masking feature of the KNMI Climate Explorer and the coordinates of 24N-49N, 125W-66W to capture the modeled near-land surface air temperatures of the contiguous United States, roughly the same coordinates used by Berkeley Earth.
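For readers who want to reproduce that masking step, here is a minimal sketch of what the land-only, area-weighted regional average amounts to. The array names, the land-fraction field and the longitude convention (degrees east, -180 to 180) are assumptions for illustration; the KNMI Climate Explorer performs the equivalent masking server-side.

```python
import numpy as np

def conus_mean(tas, lat, lon, landfrac):
    """Area-weighted, land-only mean of `tas` (time, lat, lon) over
    24N-49N, 125W-66W."""
    in_lat = (lat >= 24.0) & (lat <= 49.0)
    in_lon = (lon >= -125.0) & (lon <= -66.0)
    # cos(latitude) weights account for grid cells shrinking toward the
    # poles; the land fraction removes ocean cells ("ocean masking").
    weights = np.cos(np.radians(lat))[:, None] * landfrac
    weights = np.where(in_lat[:, None] & in_lon[None, :], weights, 0.0)
    return (tas * weights).sum(axis=(-2, -1)) / weights.sum()
```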
Near-surface air temperature observations for the contiguous U.S. are available from the Berkeley Earth website, specifically the contiguous United States data here. While the monthly data are presented in anomaly form (referenced to the period of 1951-1980), Berkeley Earth provides the monthly values of their climatology in absolute terms, which we then simply add to the anomalies of the respective months to determine the absolute monthly values. Most of the graphs, however, are based on annual average values to reduce the volatility of the data.
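As a rough illustration of that anomalies-plus-climatology step (the variable names are mine, not Berkeley Earth’s, and the series is assumed to begin in January):

```python
import numpy as np

def to_absolute_annual(anom, clim, first_year):
    """Add the 12-month climatology `clim` (deg C, Jan..Dec) to the
    monthly anomalies `anom`, then average to annual values."""
    months = np.arange(anom.size)
    absolute = anom + clim[months % 12]        # add each month's climatology
    n_years = anom.size // 12                  # keep complete years only
    annual = absolute[:n_years * 12].reshape(n_years, 12).mean(axis=1)
    return first_year + np.arange(n_years), annual
```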
The model mean of surface temperatures at the KNMI Climate Explorer starts in 1861 and the Berkeley Earth data end in August 2013, so the annual data in this post run from 1861 to 2012.
ANNUAL NEAR-LAND SURFACE AIR TEMPERATURES – THE CONTIGUOUS UNITED STATES
Figure 1 includes a time-series graph of the modeled and observed annual near-land surface air temperature anomalies for the contiguous U.S. from 1861 to 2012. At first glance, other than slightly underestimating the long-term warming trend, the models appear to do a reasonable job of simulating the warming (and cooling) of the surfaces of the contiguous United States. But as we’ll see later in the post, the consensus of the models misses the multidecadal warming from the early 1910s through the early 1940s.
Figure 1
Keep in mind, Figure 1 presents the models the way climate modelers prefer to present them: in anomaly form.
Figure 2 gives you an idea of why they prefer to present anomalies. It compares the modeled and observed temperatures on an absolute basis. Not only do the models miss the multidecadal variations in the surface temperatures of the contiguous United States, but the consensus of the models also runs too cold. That, of course, would impact how well the models simulate temperature-related factors like snowfall, drought, crop yields and growing seasons, heat waves, cold spells, etc.
Figure 2
ANNUAL CYCLES
Climate is typically defined as the average conditions over a 30-year period. The top graph in Figure 3 compares the modeled and observed average annual cycles of the contiguous U.S. surface temperatures for the most recent 30-year period (1983 to 2012). Over that period, the data indicate that the average surface temperatures for the contiguous U.S. varied from about 0.0 deg C (32 deg F) in January to roughly 24.0 deg C (75 deg F) in July. The consensus of the models, on the other hand, runs too cool by an average of about 1.4 deg C (2.5 deg F) over the course of the year.
Figure 3
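For those who want to check the arithmetic, here’s a minimal sketch of the annual-cycle calculation, assuming `monthly` is an absolute-temperature series (deg C) starting in January 1861; the names are illustrative.

```python
import numpy as np

def annual_cycle(monthly, first_year, start, end):
    """Average Jan..Dec values over the years `start`..`end` inclusive."""
    i0, i1 = (start - first_year) * 12, (end - first_year + 1) * 12
    return monthly[i0:i1].reshape(end - start + 1, 12).mean(axis=0)

# e.g. bias = (annual_cycle(model_mean, 1861, 1983, 2012)
#              - annual_cycle(obs, 1861, 1983, 2012)).mean()
# which, per the post, comes out to roughly -1.4 deg C.
```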
You might be saying to yourself that it’s only a model-data difference of -1.4 deg C, while the annual cycle in surface temperatures for the contiguous U.S. spans about 24 deg C. But let’s include the annual cycle of the observations for the first 30-year period, 1861-1890. See the light-blue curve in the bottom graph of Figure 3. The change in observed temperature from the 30-year period of 1861-1890 to the 30-year period of 1983-2012 is roughly 1.0 deg C, while the model-data difference for the period of 1983-2012 is greater than that, at about 1.4 deg C.
THE MODELS ARE PRESENTLY SIMULATING AN UNKNOWN PAST TEMPERATURE-BASED CLIMATE IN THE CONTIGUOUS UNITED STATES, NOT THE CURRENT CLIMATE
Let’s add insult to injury. For the top graph in Figure 4, I’ve smoothed the data and model outputs in absolute form with 30-year running-mean filters, centered on the 15th year. Again, we’re presenting 30-year averages because climate is typically defined as 30 years of data. This will help confirm what was presented in the bottom graph of Figure 3.
The models obviously fail to properly simulate the observed surface temperatures for the contiguous United States. In fact, the modeled surface temperatures are so cool for the most recent modeled 30-year temperature-based climate that they are even below the observed surface temperatures for the period of 1861 to 1890. That is, the models are simulating surface temperatures for the contiguous U.S. over the last 30-year period that have not existed during the modeled period.
Figure 4
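The smoothing itself is nothing exotic. A minimal sketch, assuming `annual` and `years` hold the annual-mean series and its years; with an even 30-year window there is no exact middle year, so “centered on the 15th year” is one of the two possible alignments:

```python
import numpy as np

def running_mean_30(years, annual):
    # 30-year boxcar average; "valid" mode keeps only full windows.
    smooth = np.convolve(annual, np.ones(30) / 30.0, mode="valid")
    return years[14:14 + smooth.size], smooth  # label with year 15 of each window
```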
For the bottom graph in Figure 4, I’ve extended the model outputs into the future to determine when the models finally simulate the temperature-based climate of the most recent 30-year period. The horizontal line is the average data-based temperature for the period of 1983-2012. Clearly, the models’ simulations are out of sync with reality by more than three decades.
Keep the failings shown in Figure 4 in mind the next time an alarmist claims some temperature-related variable in the contiguous U.S. is “just as predicted by climate models”. Nonsense, utter nonsense.
30-YEAR RUNNING TRENDS SHOW THAT THERE IS NOTHING UNUSUAL ABOUT THE MOST RECENT RATE OF WARMING FOR THE CONTIGUOUS UNITED STATES
The top graph in Figure 5 shows the modeled and observed 30-year trends (warming and cooling rates) of the surface air temperatures for the contiguous U.S. If trend graphs are new to you, I’ll explain. First, note the units of the y-axis. They’re deg C/decade, not simply deg C. The last data points show the 30-year observed and modeled warming rates from 1983 to 2012, and they’re plotted at 2012 (thus the use of the word trailing in the title block). The data points immediately before them, at 2011, show the trends from 1982 to 2011. Those 30-year trends continue back in time until the first data points at 1890, which capture the observed and modeled cooling rates from 1861 to 1890 (slight cooling for the data, noticeable cooling for the models). And just in case you’re having trouble visualizing what’s being shown, I’ve highlighted the end points of two 30-year periods and shown the corresponding modeled and observed trends on a time-series graph of temperature anomalies in the bottom cell of Figure 5.
Figure 5
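A minimal sketch of that trailing-trend calculation, with illustrative names; each point is a least-squares slope over a 30-year window, converted to deg C/decade and plotted at the window’s final year:

```python
import numpy as np

def trailing_trends(years, annual, window=30):
    end_years, trends = [], []
    for i in range(window, len(annual) + 1):
        t, y = years[i - window:i], annual[i - window:i]
        slope_per_year = np.polyfit(t, y, 1)[0]   # least-squares slope
        end_years.append(t[-1])
        trends.append(10.0 * slope_per_year)      # deg C/decade
    return np.array(end_years), np.array(trends)
```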
A few things stand out in the top graph of Figure 5. First, the observed 30-year warming rates ending in the late 1930s and early 1940s are comparable to the most recent observed 30-year trends. In other words, there’s nothing unusual about the most recent 30-year warming rates of the surface air temperatures for the contiguous U.S. Nothing unusual at all.
Second, notice the disparity in the warming rates of the models and data for the 30-year period ending in 1941. According to the consensus of the models, the near-surface air of the contiguous United States should only have warmed at a rate of about 0.12 deg C/decade over that 30-year period…if the warming there was dictated by the forcings that drive the models. But the data indicate the contiguous U.S. surface air warmed at a rate of almost 0.35 deg C/decade during the 30-year period ending in 1941…almost 3 times the rate shown by the consensus of the models. That additional 30-year warming observed in the contiguous United States, above and beyond that shown by the consensus of the models, logically had to come from somewhere. If it wasn’t due to the forcings that drive the models, then it had to have resulted from natural variability.
Third thing to note about Figure 5: As noted earlier, the observed warming rates for the 30-year periods ending in 2012 and 1941 are comparable. But the consensus of the models shows that, if the warming of the near-surface air of the contiguous United States were dictated by the forcings that drive the models, the warming rate for the 30-year period ending in 2012 should have been noticeably higher than what was observed. In other words, the data show a noticeably lower warming rate than the models for the most recent 30-year period.
Fourth: The fact that the models better simulate the warming rates observed during the later warming period is of no value. The model consensus and data indicate that the surface temperatures of the contiguous United States can warm naturally at rates that are more than 2.5 times higher than shown by the consensus of the models. This suggests that the model-based predictions of future surface warming for the contiguous U.S. are way too high.
CLOSING
Climate science is a model-based science, inasmuch as climate models are used by the climate science community to speculate about the contributions of manmade greenhouse gases to global warming and climate change and to soothsay about how Earth’s climate might be different in the future.
The climate models used by the Intergovernmental Panel on Climate Change (IPCC) cannot properly simulate the surface air temperatures of the contiguous United States over any timeframe from 1861 to present. Basically, they have no value as tools for determining how surface temperatures have impacted temperature-related metrics (snowfall, drought, growing periods, heat waves, cold spells, etc.), how they may be impacting them presently, or how they may impact them in the future.
As noted a few times in On Global Warming and the Illusion of Control – Part 1, climate models are presently not fit for the purposes for which they were intended.
OTHER POSTS WITH MODEL-DATA COMPARISONS OF ANNUAL TEMPERATURE CYCLES
This is the third post of a series in which we’ve included model-data comparisons of annual cycles in surface temperatures. The others, by topic, were:
- Near-land surface air temperatures of Greenland
- Sea Surface Temperatures of the Main Development Region of Hurricanes in the North Atlantic
In absolute terms some models run hot and some run cold
Mosher
Hot or cold, all of them are as GIGO wrong as the next one.
I’m starting to wonder: when will your code-bound mind grasp GIGO?
Unfortunately, the multi-model mean is meaningless, as RGB has pointed out repeatedly. You have to compare each specific model with the measurements. For example, one of 42 CMIP5 models is the best, though still flawed.
https://rclutz.wordpress.com/2015/03/24/temperatures-according-to-climate-models/
In the only data worth using, the trend over the USA is basically dead flat since 2005.
Note that WORST has a huge spike.
Nice observation. We know the models run hot globally compared to observation. Especially in the all important tropics. The Christy chart that Gavin Schmidt hates.
What Bob therefore shows is that the models have no downscaled regional skill. This is not news, but it is yet another way to show that most of what models produce is nonsense. And all the regional ‘doom’ stuff more so.
Ristvan
I don’t want to put words in your mouth but according to your observation, removing the U.S. data, which models too cold, from the world data, which models too hot, would make the GCMs run even hotter?
Just so people can see.. I grabbed 4 random temp series from the USA
But you didn’t homogenize and pasteurize them.
The government says that temperatures have to be homogenized and pasteurized before they are safe to use.
Chuckle +10
Nevertheless, I’m sure if WORST did a good check they could find enough UHI-affected records to fit their “regional expectations”
I wonder how many other university departments can afford to hire a corporate salesman (Mosh) as their frontman!
By 2020 AGORE and minions will have flipped the script to the coming Global Cooling Catastrophe. The media will jump on board with confirmation of consensus and all peer reviewing will back their story.
Mosher, benben, Nick Stokes, et al.
I don’t trust you guys- you modelers. It wouldn’t matter a whit if you are the most technically proficient modeler in the whole wide world.
WHY?
It’s the company you keep and the road you have taken.
When I see you stand up and speak out against such elements of your collective meme as “97% consensus”, or “warmest ever recorded” and on and on and on… when I see your words are closer to the thinking of Feynman than Gore, then I’ll change my opinion.
…+ 10,000 stars ! ;o)
Well said, Alan 🙂
I’ve said many times that Cook’s paper is crap.
I don’t believe in AGW because of 97%
I’ll say more later on a phone call
https://judithcurry.com/2013/07/27/the-97-consensus-part-ii/#comment-354114
“When I see you stand up and speak out against such elements of your collective meme as “97% consensus”, or “warmest ever recorded” and on and on and on… when I see your words are closer to the thinking of Feynman than Gore, then I’ll change my opinion.”
Really? I predict you WON’T change your opinion because it’s not based in facts.
“Willard
“Cook & al is not criticized because it’s crap, timg56, but because it participates in consensus-building. ”
wrong.
1. it has been criticized because it is crap.
2. it does not participate in consensus BUILDING
a) it participates in polarizing
b) it participates in silencing
c) it participates in consensus SELLING, but none of the customers are buying it.”
############
I can only speak from my professional experience conducting content analysis. They did not follow well-known protocols, despite willard’s arm waving to the contrary.
There were no exemplars.
There was no formal training of the coders.
There were no measurements of inter-coder variability.
There was no renorming.
Coders are a factor and they unbalanced the design by not having coders do the same number of ratings.
They had no controls for confirmation bias to prevent a false consensus effect.
The only thing of merit they did was recode those items that had different scores and even there they did not follow proper procedures.
On Feynman?
I’ve quoted him here many times..
Basically, what did Feynman do when faced with conflicting theory and data?
Not what he taught freshmen… but what he actually DID.
So… besides me, who said there is no proof in science? Yes, Feynman.
Curious about what he did when theory and data diverged?
which did he reject? in practice… what did he do in hard cases?
be careful……
“Mosher, benben, Nick Stokes, et al.
I don’t trust you guys- you modelers. It wouldn’t matter a whit if you are the most technically proficient modeler in the whole wide world.
WHY?
It’s the company you keep and the road you have taken.”
the company I keep?
There is some science for you. You don’t like Judith Curry? Too funny.
Anyway.. if you want to see what I helped put together for skeptics… read this
http://static.berkeleyearth.org/pdf/skeptics-guide-to-climate-change.pdf
Nice CO2 AGW sales brochure, Mosh!
The typical twists of the low-end salesman.
Very first page has two LIES on it.
Try again !!!
Did you get John Cook’s help with that piece of farce?
Steven,
Good grief. You put that brochure together for skeptics? If I were to offer one piece of evidence to support my assertions…
You hang out at Judith Curry’s place and you hang out here. Your purposely obtuse and thin rationalization trying to claim literal occurrence as cover for a figurative involvement is ever so transparent. Perhaps I’ve been giving you too much credit… specifically, in case you really don’t get it- the company you keep is the figurative global climate fearosphere. If you say that you went somewhere else and spoke out against one thing I mentioned, I’ll take your word for it, but what of the rest? “Warmest ever recorded”, etc. Too many propaganda statements from your side to mention. When have you ever come here and spoken out against any of that? On the contrary, your typical action is a cryptic hit and run rationalization in support of the latest fear meme.
I’ve no idea what you’re on about regarding you and Feynman. As has been pointed out to you many times, your typical cryptic dialogue and veiled innuendos don’t communicate much to those of us who are somewhat slow and require others to actually tell us what your words mean, rather than have us guess.
in case you really don’t get it- the company you keep is the figurative global climate fearosphere.
Speaking personally, I need the Other Side. It was hashing it out with the VeeV and others that helped us correct problems with our own work. (The Other Side needs us, too, though they do not seem to realize it.)
And in order to speak with them it is necessary to remain on speaking terms.
So true. I find it quite important to keep engaging people that really disagree with our work, just to keep your mind sharp.
” just to keep your mind sharp.”
It’s not working for you!!
Not sure what effect it has on my mind. But it sure has a positive effect on my work.
Steven Mosher, July 4, 2016, 1:07pm
Yeah, they do.
**************************************
Oh, boy, do I wish I had not ruined my 4th of July by coming to this thread.
Ha! I will NOT let that happen. I will find some good music…. 🙂
(P.S. Sure wish Marcus would be taken off moder@tion hold…. hang in there, Marcus.)
I’m hangin’ on dear Janice, but my rope is getting very thin !!! LOL and Happy Independence Day !!
P.S. Only took 10 hours to be approved !! LOL
It was at July 5, 2016 at 2:14 am
I’m asleep at that time. Expecting comment approval at such a time is not a reasonable expectation.
Added:
BTW, it was approved 5 hours ago.
…Anthony, really? 10 hours for a comment to get through?
Improve the quality of your commentary, and you’ll not need moderation holds.
..Anthony, in honor of the Fourth of July, I salute you…no matter how much you and Dr. S. dislike me! Cheers…..
I don’t dislike you, but I do dislike your one-liner comments that have no substance.
Antidote to warmist nausea:
“Best Friend” (military homecoming and DOGS — yay! 🙂 )
(youtube)
And!
GO, US ARMED FORCES!!!
(youtube)
THEY know what today is all about! 🙂
Thank you.
#(:))
(happy again!)
Bob: Figure 4 shows that all the data used prior to 30 years ago is HOG WASH and worthless. (Similar to the flat trend for the Argo data.) As an Engineer, if I had someone trying to fool me with the worthless garbage (GIGO) that these folks are using, from the 19th century to 1980, I’d FIRE them if I could. (Alas, most of these people are in “tenured” positions. And NONE of them are Engineers…so they have no imperatives to make things that are USEFUL or work.)
Frankly, the models come out of this looking pretty good. If you’re going to claim that the offset makes the models look bad, you’re also going to have to explain how the models that you claim look so bad manage to track the data so well when expressed as anomalies.
Philip Schaeffer July 5, 2016 at 12:49 am
TUNING! They are carefully tuned to reproduce the anomalies.
w.
Bob,
If I’m reading figures 1 and 2 correctly then, according to your source, the mean of the models covering the con-US is *underestimating* the observed warming, both in terms of trend and absolute temperature. Is this right?
What you need to understand is that the BEST fabrication used has nothing much to do with the “observed” warming at all.
But that *is* what it shows, right? The con-US land temperature record, as produced by BEST using peer-reviewed methods, shows warming that is faster in terms of trend and higher in terms of absolute temperature than that produced by the model mean (at least as these are interpreted by Bob).
You are correct, and honestly, this whole “the models are completely wrong” theme on WUWT is running a bit thin when it’s pretty obvious from these graphs that they’re doing just fine.
As noted somewhere above, models are more about replicating patterns and system behaviours than exact temperatures, because the reason to make a model is not to exactly predict the temperature 50 years from now, but to show the effect of certain things (mostly greenhouse gases but also other factors) on the climate. So from that perspective the models are doing really very well, and the modern models (e.g. CMIP6) even better.
“So from that perspective the models are doing really very well, and the modern models (e.g. CMIP6) even better.”
Meh. I can always fit a cubic polynomial to a given time series better than (or, at least as well as) a quadratic one. That does not mean that a cubic polynomial fit can generally extrapolate such a series forward better than a quadratic one. It doesn’t even mean that I have any clue how the series will behave going forward using either fit.
You really are coming off as numerically illiterate, Benben.
The takeaway is not that the models are conservative, but that they are wrong.
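That point is easy to demonstrate with synthetic data. A minimal sketch; the sine-plus-noise series below is made up and stands in for any real record:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 60)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# The cubic always fits the sample at least as well as the quadratic...
quad, cubic = np.polyfit(x, y, 2), np.polyfit(x, y, 3)
rms = lambda e: np.sqrt(np.mean(e ** 2))
print("in-sample RMS  quad:", rms(np.polyval(quad, x) - y))
print("in-sample RMS cubic:", rms(np.polyval(cubic, x) - y))

# ...but extrapolated past the data, both diverge from the underlying
# sin(x), and the better in-sample fit guarantees nothing about forecasts.
xf = np.linspace(6.0, 9.0, 30)
print("forecast RMS  quad:", rms(np.polyval(quad, xf) - np.sin(xf)))
print("forecast RMS cubic:", rms(np.polyval(cubic, xf) - np.sin(xf)))
```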
“As noted somewhere above, models are more about replicating patterns and system behaviours…”
Give me enough parameters, and I can do that with just about any model.
However, if you do not have the right absolute temperatures, then you cannot possibly be modeling reality. GIGO writ large.
Bartemis said- “However, if you do not have the right absolute temperatures, then you cannot possibly be modeling reality. GIGO writ large.”
But Bartemis, benben said “modelling isn’t meant to replicate reality exactly as it is”.
LOL You see, it’s fine if the models are wrong, they were never meant to be right! 🙂
haha bartemis, please do show us that you can replicate the historical climate patterns better than the current models. Use however many parameters you want.
I’m going to guess your response: *crickets chirping*
It’s a sad fact that the commenters here claim they can do so much, but never seem to get around to doing anything. With the exception of the graphs produced above by Mr. Tisdale, for which my thanks.
benben
“haha bartemis, why please do show us that you can replicate the historical climate patterns better than the current models. Use however many parameters you want.”
Oh. My. Word. The climate models can only replicate the historical climate patterns because the historical climate data was fed into them! I bet anyone can replicate the historical climate patterns with the same degree of accuracy as the models, if they also have access to the historical weather pattern data!!
If the models do not match the historical climate patterns, they tweak them until they do. That does NOT equate with them being able to “replicate” the historical climate patterns on their own. You do understand the difference….right?
“I bet anyone can replicate the historical climate patterns with the same degree of accuracy as the models, if they also have access to the historical weather pattern data!!”
Exactly. It’s just curve fitting to an arbitrary model that fails to replicate critical behavior.
That is just a straight-up falsehood (I would call it a lie but that would imply that you actually knew you were wrong, which you probably don’t). And you would know if you spent any time – any time at all – actually looking at how models are constructed. Once again, I invite you to take a look at the user manuals (or *gasp* the actual open source publicly available code) of the community earth systems model.
“… I invite you to take a look at the user manuals (or *gasp* the actual open source publicly available code) of the community earth systems model…”
———————————-
I’ve read “harry.readme”. That gave me a pretty good idea about climate models and the modelers. Does that count?
Benben, you are clueless. If it doesn’t match the temperatures, it doesn’t match reality. Simple as that. It doesn’t matter if you can extrude some composite quantity that looks vaguely like the same real composite quantity. If it doesn’t match the actual temperatures, it’s just throwing darts at a wall, and drawing a circle around a cluster and calling it a bull’s eye.
This is known as the Texas Sharpshooter’s Fallacy.
Well bartemis, I’m still waiting for your regression function that perfectly matches the historical temperature trend. Show us how numerically clueless I am by actually doing it. I’ll be watching this thread for a couple more days!
Cheers,
Ben
benben, you are still on about open source code and having access to it. So put a link to your open source code in your next comment or stop commenting.
As for tweaking, yep they do. We are now on to CMIP6, I believe. However, I am about to say something nearly heretical. Because El Nino/La Nina conditions echo so well into the next year of SSTs, the window is cracking open that oceans must play a major role and may even be more involved in the long-term trend than previously thought. Why do I think the window is cracking? Because the current models continue to need to be tweaked, so something is still not right about them. Model construction is really not a bad thing to do. Its most useful function is to point out what you don’t have right yet. Its greatest danger is that it may cause you to miss a confounding variable that is the true cause of both x and y.
Models used in agriculture went through a phase such as that. Crops are highly susceptible to viruses. It’s all well and good to use models to find out how the virus works, how to kill the virus, and create chemicals made for that purpose, to kill the virus that was found in the ground and in the plant. Yet the viruses kept coming back. Turns out the confounding variable was often insects that delivered the viruses. So then they had to figure out how the insects got the virus. Thus the search for confounding variables became a key component of crop disease research.
In climate research, CO2 has become the end game, with incentives to keep it the end game. But my guess is that the models will keep failing, especially given the complexity of climate. The question is, will plausible confounding variables be allowed into the game or will we be onto the new and improved tweaked CMIP3245?
So benben, cough up that link so we can read the same code you are reading. This blog is filled with code readers and writers who would be highly interested in an intelligent vigorous debate.
there you go: http://www.cesm.ucar.edu/models/cesm1.2/
Have fun
benben, July 6, 2016 at 5:04 am
“I’m still waiting for your regression function that perfectly matches the historical temperature trend.”
Do a least squares fit. Duh.
The historical record is well approximated as a trend plus an approximately 65-year cyclical phenomenon. This pattern was laid in well before CO2 levels had risen appreciably above the purported pre-industrial level, and has nothing to do with humans. The most likely prognostication is that the pattern will continue.
http://i1136.photobucket.com/albums/n488/Bartemis/ex_zpsgxhcx6bz.jpg
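For reference, the fit described above (a linear trend plus a ~65-year cycle) is an ordinary least-squares problem. A minimal sketch, with illustrative names:

```python
import numpy as np

def fit_trend_plus_cycle(years, anom, period=65.0):
    """Least-squares fit of anom ~ intercept + trend + 65-year sinusoid."""
    t = years - years.mean()          # center for numerical conditioning
    w = 2.0 * np.pi / period
    # Design matrix: intercept, trend, and the two quadrature components
    # of the cycle (equivalent to an unknown amplitude and phase).
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
    return coef, X @ coef             # parameters and fitted series
```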
That’s just a picture. Please share with us the technical details: your formula, your r2, and how that compares to the r2 of the CMIP5 models.
Forrest,
I suspect you are referring to models that were run in the past that projected/predicted “future” global temperature increases. As time has moved along, we can see that those past projections were wrong, because the current temperature trend in reality is cooler than those models predicted it would be.
The models being discussed here are US contiguous temps only and presented material does not run into the future.
YOU WROTE:
“Climate science is a model-based science,”
MY RESPONSE:
Computer games and inaccurate predictions are not science — they are one of many ways that people with science degrees can waste the taxpayers’ money.
The process of climate change is not understood well enough to build a useful model.
40 years of inaccurate projections is proof the climate physics used for the current models (CO2 is evil) is wrong.
Climate “science” is mainly climate politics with three goals:
(1) Falsely demonizing CO2 in an effort to empower central governments “to save the Earth”,
(2) Creating a new “green” industry to enrich leftists with government subsidies and loans, and
(3) Attacking the foundation of economic growth: cheap sources of energy … in an effort to promote slow growth socialism ( by falsely claiming the slow economic growth inherent with socialism is actually good news, rather than bad news, because slower economic growth will slow the destruction of the Earth from that satanic gas CO2 )
Climate Change Blog for non-scientists;
No ads
No money for me
A public service
Leftists should stay away
http://www.elOnionBloggle.Blogspot.com
I am impressed (mostly) by the posts from engineers on this forum. Why is this?
Because if an engineer gets it wrong people die
I learnt many years ago the golden rule in engineering: Anything is only as strong as its weakest link. Hence the all-important factors of safety. The uncertainties are more important than the certainties
Another post here that impressed me was by an Arborist who quickly debunked the assumption that measured temperature was the only control over the flowering dates of cherries. The cheek of the man – to learn this from the field, without a degree
If engineers constructed the models I am sure that we would quickly see that models are virtually useless in predicting the future. They would incorporate the uncertainties. Shock horror
Modelling must be fun. One not need leave the air conditioned office. Or, better still, sit beside the pool with the lap top, drinking beer
Hi Michael,
Here is a predictive model that works, although only for four months into the future, and only if there are no big volcanoes.
My formula is: UAHLT Calc. = 0.20*Nino3.4SST +0.15
where
Nino3.4 is the temperature anomaly in degrees C of the SST in the Nino3.4 area, as measured by NOAA in month m. Nino3.4 comprises about 1% of the Earth’s surface area.
UAHLT is the Lower Tropospheric temperature anomaly of Earth in degrees C as measured by UAH in month (m plus 4);
It is apparent that UAHLT Calc. is substantially higher than UAHLT Actual for two periods, each of ~5 years, BUT that difference could be largely or entirely due to the two major volcanoes, El Chichon in 1982 and Mt. Pinatubo in 1991.
In Jan2008 I demonstrated that dCO2/dt changed ~contemporaneously with UAHLT, and its integral atmospheric CO2 changed 9 months later. Now we can use the Nino3.4 anomaly to predict changes in UAHLT and thus in CO2 up to (9+4=) 13 months later.
At this rate, we’ll be getting to reliable multi-decadal predictions before you know it… 🙂
Regards, Allan
https://www.facebook.com/photo.php?fbid=1030751950335700&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
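Allan’s formula is simple enough to sketch in a few lines, assuming `nino34` and `uahlt` are aligned monthly anomaly series; the names are illustrative, not his actual code:

```python
import numpy as np

def predict_uahlt(nino34, slope=0.20, intercept=0.15):
    """Predicted UAH LT anomaly four months after each Nino3.4 value."""
    return slope * np.asarray(nino34) + intercept

def hindcast_skill(nino34, uahlt, lag=4):
    """R-squared of the lagged prediction against observed UAH LT."""
    pred = predict_uahlt(np.asarray(nino34)[:-lag])  # predicts months lag..end
    obs = np.asarray(uahlt)[lag:]
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```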
Replotting for the period after the influence of the two major volcanoes had abated (El Chichon in 1982 and Mt. Pinatubo 1991):
My formula is: UAHLT Calc. = 0.20*Nino3.4SST +0.15
where
Nino3.4 is the temperature anomaly in degrees C of the SST in the Nino3.4 area, as measured by NOAA in month m. Nino3.4 comprises about 1% of the Earth’s surface area.
UAHLT is the Lower Tropospheric temperature anomaly of Earth in degrees C as measured by UAH in month (m plus 4);
Plotting from 1Jan1996 to (about) now:
Note that UAHLTCalc has been moved forward 4 months in time to show alignment – in reality it leads actual UAHLT by about 4 months.
Note how well the two plots track each other in detail – it must be coincidence, spurious correlation, etc. – we KNOW that CO2 drives temperature. 🙂
This relationship has been published before.
See Nature, Vol.367, p.325, 27Jan1994 co-authored by John Christy and Richard McNider.
https://www.facebook.com/photo.php?fbid=1033049503439278&set=p.1033049503439278&type=3&theater
Maybe this time it will show the graph…
https://www.facebook.com/photo.php?fbid=1033112303432998&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
“Modeling must be fun. One not need leave the air conditioned office. Or, better still, sit beside the pool with the lap top, drinking beer.”
EVEN BETTER: With all the money you are making as a “scientist” on the goobermiont payroll, you can afford to hire real models to strut by as you sit beside a swimming pool during your two-hour two-martini lunch break.
Of course they will all be modeling bikinis that you are interested in buying “for the wife”.
You can lay there feeling secure that your climate prediction is so long-term that you will be dead and gone before anyone can prove you wrong!
And you can tell everyone you know that you are working to save the Earth !
Or you could admit the truth:
The haphazard data collection, and arbitrary adjustments, when estimating the average temperature of the planet, and the predictions of the average temperature 100 years in the future … are a complete waste of the taxpayers’ money.
In the presentation here, models are not being compared with any credible array of vetted station records, but with BEST’s numerical sausage of model projections, with finely minced snippets of actual data providing only a taste of verisimilitude. I’m surprised that anyone unconnected with the purveyors of that ersatz would swallow it.
You are making me hungry, must go snack on something. Yumm, snippets of verisimilitude!
The 1 degree C difference between models and observations probably explains some of the model failure to reproduce the increase in precipitation associated with the warming. According to the Clausius–Clapeyron relation, a 1C increase in temperature increases the water holding capacity of the air by about 7%. The water cycle is a pretty important part of the climate system and despite water vapor being a greenhouse gas, any acceleration of the cycle may be a net negative feedback to any warming.
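For reference, the ~7% per degree C figure follows from the Clausius–Clapeyron relation: the fractional change in saturation vapor pressure per kelvin is approximately L/(R_v T²). A quick sketch with standard constants:

```python
L_v = 2.5e6    # latent heat of vaporization, J/kg
R_v = 461.5    # specific gas constant for water vapor, J/(kg K)
T = 288.0      # a typical surface temperature, K

# Fractional increase in saturation vapor pressure per 1 K of warming.
print(f"~{100.0 * L_v / (R_v * T**2):.1f}% per K")   # about 6.5%/K, i.e. ~7%
```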
For AGWer commenters who continue to tell me to “read the code”, most climate models involved in IPCC-sanctioned experiments do not publish their code in its entirety. You have to be a “member”. So unless you have a direct link to the “codes” you keep telling me to read, move on. Your suggestion is a poor example of intelligent and informed debate, revealing your lack of understanding of models and how they are driven or forced, not mine.
Pam,
Maybe you missed the post where benben informed us that his “flatmate” uses computer models. Which of course, in benben’s world, means he is an expert on models by proximity to a modeler. Sadly, his roommate can only claim proximity to benben……:)
You have got to be kidding. No I did not read that. But I certainly questioned his acumen given his comments. His last comment in our debate was a bit like stick your tongue out and run. So it seems I was debating a child.
I forgot about that. Apologies to Mosh and Nick for lumping them in with benben.
It works through a two-track system where the published models are open source and relatively recent (sometime in 2014, I believe), while the cutting-edge current models are only for members that put in effort to develop code themselves and publish papers based on that new work.
But a 2-year-old model is more than good enough to answer most of the questions here. So please go ahead and look at it yourselves: http://www.cesm.ucar.edu/models/cesm1.2/
And indeed the models I work on are not climate models. Actual climate modelers don’t come here because they don’t like the toxic atmosphere, I’ve been told.
Cheers,
Ben
Regarding the use of a 1% annual increase in CO2 (either spun up or instant), which is calculated to increase temperature and is then echoed throughout the IPCC suite of model calculations: the absolute temperature difference of 1 degree C may be allowing a window into the presence of a possible confounding factor. It is possible that the amount of heat being added to the atmosphere is from oceanic discharge of stored solar heat through evaporation. Put that in the model and you might get not only a similar T trend, but also a similar absolute T.
There is evidence that at least two researchers, using CMIP models (whichever is the current model), are using SST data to drive a model (though they can’t use the data directly). Unfortunately, they are also bound by the model’s idealized input of 1% annual increasing CO2 instead of it being an output.
I am making educated guesses here but at least we now have model research using SST as the forcing on CMIP models:
“CFMIP Patterned SST forcing dataset”
A patterned SST forcing dataset is required for what was the amipFuture experiment in CFMIP-2/CMIP5, now called amip-pat-4K in CFMIP/CMIP6. This is a normalised multi-model ensemble mean of the ocean surface temperature response pattern (the change in ocean surface temperature (TOS) between years 0-20 and 140-160, the time of CO2 quadrupling in the 1% runs) from thirteen CMIP3 AOGCMs (cccma, cnrm, gfdlcm20, gfdlcm21, gisser, inmcm3, ipsl, miroc-medres, miub, mpi, mri, ncar-ccsm3, and ncar-pcm1.)
http://cfmip.metoffice.com/CMIP6.html
What the design is, I think, is to compare the two outcomes, the regularly forced model output (which has already been done and is available to researchers as the control), and an SST forced run. This one should be interesting.
Hi Pamela,
I did that SST-forced model run for you.
See my plots above, at
https://wattsupwiththat.com/2016/07/04/in-honor-of-the-4th-of-july-a-few-model-data-comparisons-of-contiguous-u-s-surface-air-temperatures/comment-page-1/#comment-2252543
and
https://wattsupwiththat.com/2016/07/04/in-honor-of-the-4th-of-july-a-few-model-data-comparisons-of-contiguous-u-s-surface-air-temperatures/comment-page-1/#comment-2253260
As you can see, the equation is extremely complicated, and requires the very latest in computing power, the new “Son of Cray” computer (in Scottish Gaelic “MacCray”).
The R2 for the two plots (after 1Jan1996) is 0.55 – not bad for two unrelated natural datasets.
Of course we all KNOW that CO2 drives temperature, so it must be spurious correlation.
Best personal regards, Allan 🙂