Another model failure – seeing a sea of red where there is none

Seeing red just took on a whole new meaning – Anthony

Model-Data Comparison – Sea Surface Temperature Anomalies – November 1981 through September 2012

Guest post by Bob Tisdale

This is not the monthly sea surface temperature update. See the post September 2012 Sea Surface Temperature (SST) Anomaly Update.

CMIP3 (IPCC AR4) HINDCASTS/PROJECTIONS

I included the CMIP3 (IPCC AR4) multi-model mean outputs (hindcasts/projections) in a monthly sea surface temperature anomaly update for the first time six months ago in March 2012 Sea Surface Temperature (SST) Anomaly Update – A New Look. It was suggested that the model-data comparison should not serve as the monthly update, so I’ve provided it separately. I’ll try to update the model-data comparison every six months or so.

The graphs include the multi-model mean of the CMIP3 hindcasts/projections for sea surface temperatures, comparing them to the observed data. The observed and modeled linear trends are also shown. This is done for the global, hemispheric and ocean basin sea surface temperature anomalies. As you will recall, CMIP3 is the climate model archive used by the IPCC for its 4th Assessment Report (AR4).

The multi-model mean and linear trends of the CMIP3 model simulation data definitely make the graphs busier. Refer to the Global sea surface temperature anomaly graph. We added the smoothed data (13-month running-average filter) on a trial basis a few months ago, and readers requested that we keep the smoothed data. On some occasions, the trend lines may obscure the most recent changes in the dataset.

(1) Global Sea Surface Temperature Anomalies

TREND MAPS

Modeled versus observed correlations with time from 1982 to 2011 are shown in the following two maps. The scale below each map is correlation coefficient, not temperature. A positive correlation coefficient of 1.0 (hot pink) would indicate that an area warmed linearly from 1982 to 2011, while a negative correlation coefficient of -1.0 (purple) would indicate that an area cooled linearly over that period. Basically, what the maps show are the modeled and observed warming and cooling trend patterns. There are no similarities. Keep those two images in mind the next time you see a peer-reviewed paper that projects regional climate on decadal or multidecadal timescales. The modelers have no hope of doing so unless they can predict ENSO and its impacts on regional sea surface and land surface temperatures. They simulate both poorly.
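For readers who want to see exactly what "correlation with time" measures at each grid cell, here is a minimal sketch with synthetic monthly series (all numbers are made up for illustration, not real SST data):

```python
import numpy as np

def corr_with_time(series):
    """Pearson correlation of a monthly series with a linear time index.

    +1 means the series rose perfectly linearly over the period,
    -1 means it fell perfectly linearly; values near 0 mean no
    consistent linear change, however large the swings.
    """
    t = np.arange(len(series), dtype=float)
    return np.corrcoef(series, t)[0, 1]

# A steadily warming cell correlates strongly with time...
warming = 0.01 * np.arange(360)  # 30 years of monthly data, purely linear

# ...while a cell dominated by an ENSO-like cycle does not.
rng = np.random.default_rng(0)
months = np.arange(360)
cyclic = 0.5 * np.sin(2 * np.pi * months / 48) + 0.1 * rng.standard_normal(360)

print(corr_with_time(warming))  # ~1.0 (perfectly linear)
print(corr_with_time(cyclic))   # near 0
```

Note that the correlation says nothing about how *fast* a cell warmed, only how steadily; that distinction comes up again in the comments below.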

(2) Modeled and Observed Correlations With Time (REVISED:  I altered the title block.)

NOTES ABOUT THE MODEL-OBSERVATION COMPARISONS

The model-observations comparisons serve as updates to two of my favorite posts: Satellite-Era Sea Surface Temperature Versus IPCC Hindcast/Projections Part 1 and Part 2. Refer to those posts for the discussions of the monumental differences between the models and observations. They are also presented in my first book If the IPCC was Selling Manmade Global Warming as a Product, Would the FTC Stop their Deceptive Ads?, in Section 8. A few model-data comparisons were also provided in my new book Who Turned on the Heat? – The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation. More on that later.

The multi-model mean is not expected to reproduce the year-to-year variations in sea surface temperature associated with the El Niño-Southern Oscillation (ENSO). Some of the models simulate ENSO; others don’t. The models that do attempt to simulate ENSO do a poor job of it. (This is documented in numerous peer-reviewed papers. Refer to the post Guilyardi et al (2009) “Understanding El Niño in Ocean-Atmosphere General Circulation Models: progress and challenges”.) Each model produces ENSO events on its own schedule; that is, the modeled ENSO events do not reproduce the observed frequency, duration, and magnitude of El Niño and La Niña events. Since the multi-model mean presents the average of all of those modeled out-of-synch ENSO signals, they are smoothed out. For this reason, we are only concerned with the disparity in the modeled and observed trends.
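The smoothing effect of averaging out-of-synch signals is easy to demonstrate with a toy ensemble (synthetic numbers, not actual model output; the trend, amplitudes and periods below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(360)          # ~30 years of monthly data
trend = 0.001 * months           # a common "forced" warming, deg C per month

# Each toy "model" shares the trend but carries an ENSO-like oscillation
# with its own random period and phase, so the wiggles are out of synch.
runs = []
for _ in range(20):
    period = rng.uniform(36, 84)             # 3-to-7-year pseudo-ENSO cycle
    phase = rng.uniform(0, 2 * np.pi)
    runs.append(trend + 0.3 * np.sin(2 * np.pi * months / period + phase))
runs = np.array(runs)

ensemble_mean = runs.mean(axis=0)

# The mean's departure from the pure trend is much smaller than any
# single run's, because the out-of-phase cycles cancel each other.
print(np.std(runs[0] - trend))        # roughly 0.2 for one run
print(np.std(ensemble_mean - trend))  # much smaller
```

This is why the multi-model mean looks so much smoother than the observations: the wiggles average away, leaving (approximately) the forced trend.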

And as shown above, the difference between the linear trends on a global basis is quite large. The model simulations hindcast/project a global sea surface temperature anomaly warming rate that is about 80% higher than the observed rate. Depending on the subset, the models perform better or worse. For example, the model-simulated rate of warming for Northern Hemisphere sea surface temperature anomalies is only about 24% higher than observed, while in the Southern Hemisphere, the models say the sea surface temperatures should be warming at a rate that is more than 2.5 times faster than the observed rate.

Keep in mind, the global oceans represent about 70% of the surface area of the globe, and the climate models show no skill at being able to simulate their warming. Global sea surface temperatures have warmed over the past 30+ years in response to ENSO events, not anthropogenic greenhouse gases. This was presented and discussed in detail in my recent book titled Who Turned on the Heat? – The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation and in a good number of posts at my blog.

NOTE: CMIP5-based sea surface temperature outputs had been available through the KNMI Climate Explorer, and I was hoping to use them in this post. Unfortunately, they were removed from the KNMI Climate Explorer. Hopefully they will return in the near future so that I can include them in the next update, to serve as a preview of how badly the newest models simulate sea surface temperatures in advance of the IPCC’s upcoming 5th Assessment Report.

The MONTHLY graphs illustrate raw monthly OI.v2 sea surface temperature anomaly data from November 1981 to September 2012, as presented by the NOAA NOMADS website linked at the end of the post. I’ve added the 13-month running-average filter to smooth out the seasonal variations. The trends are based on the raw data, not the smoothed data.
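For anyone who wants to reproduce the two calculations behind the graphs, here is a sketch with a synthetic anomaly series; the 13-month centered window and the trend-computed-on-raw-data convention match the graphs, but the series itself is made up:

```python
import numpy as np

def running_mean_13(x):
    """Centered 13-month running average; the six endpoints on each side
    are left undefined (NaN), as a centered filter cannot reach them."""
    out = np.full(len(x), np.nan)
    half = 6
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

def trend_per_decade(x):
    """OLS slope of the RAW monthly anomalies, in deg C per decade."""
    t = np.arange(len(x), dtype=float)
    slope = np.polyfit(t, x, 1)[0]  # deg C per month
    return slope * 120.0            # 120 months per decade

# Synthetic series: a 0.1 deg C/decade trend plus a seasonal wiggle.
months = np.arange(360, dtype=float)
anoms = 0.1 * months / 120.0 + 0.2 * np.sin(2 * np.pi * months / 12.0)

smooth = running_mean_13(anoms)
print(trend_per_decade(anoms))  # close to the prescribed 0.1 deg C/decade
```

The filter removes the seasonal wiggle from the plotted curve, while the trend is fitted to the unsmoothed values, so smoothing cannot bias the reported slope.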

Lastly, the differences between the models and observations are not discussed further in the rest of the post. Feel free, however, to comment on the disparity between the models and the observations.

NINO3.4, INDIVIDUAL OCEAN BASIN AND HEMISPHERIC SEA SURFACE TEMPERATURE COMPARISONS

(3) NINO3.4 Sea Surface Temperature Anomalies

(5S-5N, 170W-120W)

(4) Northern Hemisphere Sea Surface Temperature (SST) Anomalies

(5) Southern Hemisphere Sea Surface Temperature (SST) Anomalies

(6) North Atlantic Sea Surface Temperature (SST) Anomalies

(0 to 70N, 80W to 0)

(7) South Atlantic Sea Surface Temperature (SST) Anomalies

(60S to 0, 70W to 20E)

(8) North Pacific Sea Surface Temperature (SST) Anomalies

(0 to 65N, 100E to 90W)

(9) South Pacific Sea Surface Temperature (SST) Anomalies

(60S to 0, 120E to 70W)

(10) Indian Ocean Sea Surface Temperature (SST) Anomalies

(60S to 30N, 20E to 120E)

(11) Arctic Ocean Sea Surface Temperature (SST) Anomalies

(65N to 90N)

(12) Southern Ocean Sea Surface Temperature (SST) Anomalies

(90S-60S)

INTERESTED IN LEARNING HOW WE KNOW MOTHER NATURE, NOT GREENHOUSE GASES, WARMED THE GLOBAL OCEANS OVER THE PAST 30 YEARS?

The sea surface temperature record indicates El Niño and La Niña events are responsible for the warming of global sea surface temperature anomalies over the past 30 years, not manmade greenhouse gases. I’ve searched sea surface temperature records for more than 4 years, and I can find no evidence of an anthropogenic greenhouse gas component. That is, the warming of the global oceans has been caused by Mother Nature, not anthropogenic greenhouse gases.

I’ve recently published my e-book (pdf) about the phenomena called El Niño and La Niña. It’s titled Who Turned on the Heat? with the subtitle The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation. It is intended for persons (with or without technical backgrounds) interested in learning about El Niño and La Niña events and in understanding the natural causes of the warming of our global oceans for the past 30 years. Because land surface air temperatures simply exaggerate the natural warming of the global oceans over annual and multidecadal time periods, the vast majority of the warming taking place on land is natural as well. The book is the product of years of research into the satellite-era sea surface temperature data that’s available to the public via the internet. It shows how the data itself accounts for the warming—and there are no indications the warming was caused by manmade greenhouse gases. None at all.

Who Turned on the Heat? was introduced in the blog post Everything You Ever Wanted to Know about El Niño and La Niña… …Well Just about Everything. The Updated Free Preview includes the Table of Contents; the Introduction; the beginning of Section 1, with the cartoon-like illustrations; the discussion About the Cover; and the Closing.

Please buy a copy. (Paypal or Credit/Debit Card). It’s only US$8.00.

You’re probably asking yourself why you should spend $8.00 for a book written by an independent climate researcher. There aren’t many independent researchers investigating El Niño-Southern Oscillation or its long-term impacts on global surface temperatures. In fact, if you were to perform a Google image search of NINO3.4 sea surface temperature anomalies, the vast majority of the graphs and images are from my blog posts. Try it. Cut and paste NINO3.4 sea surface temperature anomalies into Google. Click over to images and start counting the number of times you see Bob Tisdale.

By independent I mean I am not employed in a research or academic position; I’m not obligated to publish results that encourage future funding for my research—that is, my research is not agenda-driven. I’m a retiree, a pensioner. The only funding I receive is from book sales and donations at my blog. Also, I’m independent inasmuch as I’m not tied to consensus opinions so that my findings will pass through the gauntlet of peer-review gatekeepers. Truth be told, it’s unlikely the results of my research would pass through that gauntlet because the satellite-era sea surface temperature data contradicts the tenets of the consensus.

SOURCES

The Reynolds Optimally Interpolated Sea Surface Temperature Data (OISST) are available through the NOAA National Operational Model Archive & Distribution System (NOMADS).

http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh

The CMIP3 Sea Surface Temperature simulation outputs (identified as TOS, presumably for Temperature of the Ocean Surface) are available through the KNMI Climate Explorer Monthly CMIP3+ scenario runs webpage. The correlation maps are available through the KNMI Climate Explorer as well.


53 thoughts on “Another model failure – seeing a sea of red where there is none”

  1. Bob,
    Your big red plot might be more impressive if you could say at the time what you are actually plotting. The text on the graph is unreadable, and you have to read down a long way to find out.

    It seems to be a “correlation coefficient” with time. Now it’s usual to calculate correlation coefficients between stationary random variables, but time? To get such a coefficient, you have to divide by the standard deviation of each variable. What did you use for the standard deviation of time? What does it mean?

  2. I have nothing against models. They usually look nice walking down the runway or in the catalog pictures. But when they do that, they’re trying to sell something I don’t really need and rarely looks the same in real life.

  3. PS I bought the book. I haven’t finished it but he’s a good communicator. He can make the complex simple. The book can give you a handle on one piece of the chaotic puzzle that is global climate.

  4. If you are comparing temperature data and the models with a start point in 1981, why aren’t they aligned at the same start point? The models start at a temp 0.1 lower than the reality, but they can’t (by definition) be ‘wrong’ on day 1. The divergence would be much more striking if the traces were aligned properly at the start point.

  5. In your first graph, the correlation coefficient (of temperature with time) is not very informative. It indicates how smooth the change is, not how fast. The multi-model mean is naturally smoother than the real temperatures, as you say “Since the multi-model mean presents the average of all of those modeled out-of-synch ENSO signals, they are smoothed out. ” So it’s no surprise that the correlation coefficient is closer to 1 for the multi-model mean.

    It would be better to show a map of the temperature trends, in degC/decade.

  6. HaroldW says:
    October 8, 2012 at 2:22 pm

    “In your first graph, the correlation coefficient (of temperature with time) is not very informative. It indicates how smooth the change is, not how fast. The multi-model mean is naturally smoother than the real temperatures, as you say “Since the multi-model mean presents the average of all of those modeled out-of-synch ENSO signals, they are smoothed out. ” So it’s no surprise that the correlation coefficient is closer to 1 for the multi-model mean.”

    Live by the sword, die by the sword – IPCC climate modelers always use the multi model mean as if it had any qualitative advantage over one model run. And I have never seen a valid defense of that approach by them.

    So – the very best they can come up with is an average of dozens of runs, too smooth to be real – and if that is the very best they can come up with, they should be measured by it.

    You can’t have it both ways.

  7. braddles says: “If you are comparing temperature data and the models with a start point in 1981, why aren’t they aligned at the same start point?”

    NOAA presents the Reynolds OI.v2 sea surface temperature anomalies with the base years of 1971-2000 at the NOMADS website. I used the same base years for the multi-model mean through the KNMI Climate Explorer.

  8. Note the pronounced annual cycle in the NH SST anomaly that develops after 2000. It’s not so pronounced in the SH. You see the same thing in the UAH troposphere temperatures.

    Something in the climate changed around 2000 – warmer summers compared to winters (in anomaly terms) – the opposite of what GHG warming predicts.

    Warmer summers and relatively cooler winters is the signature of decreased cloud cover or decreased aerosols.

  9. HaroldW says: “In your first graph, the correlation coefficient (of temperature with time) is not very informative. It indicates how smooth the change is, not how fast.”

    Actually, the maps are informative, because they indicate the differences in the “patterns” of warming (and cooling for the observations).

    HaroldW says: “It would be better to show a map of the temperature trends, in degC/decade.”

    Unfortunately, that option is not available through the KNMI Climate Explorer, and that’s my primary tool, so I presented what I could. My other option was to present the observed and modeled linear trends on zonal mean bases, like Figure 5-37 from my Who Turned on the Heat?:

    With the next model-data comparison six months from now, I’ll include the zonal-mean graphs for the Atlantic, Pacific and Indian Oceans so that the correlation maps make more sense. I’ve left myself a note to do that.

  10. DirkH says: October 8, 2012 at 2:32 pm
    “IPCC climate modelers always use the multi model mean (MMM) as if it had any qualitative advantage over one model run”

    I’m not sure that they do use it so faithfully. But I’m sure no-one claims that the MMM is a measure of climate variability. The MMM is much less variable than individual runs.

    And that’s HaroldW’s point. The MMM may give a reasonable estimate of trend. But in calculating the correlation one divides the trend by the standard deviation of the temperature estimate. And that is expected to be much less for a MMM than for the measured SST, just through averaging. That’s why the plot is red.

  11. Nick Stokes: In the KNMI Climate Explorer, there is no way for me to create linear trend maps in deg C/decade. One of the most common questions/requests of KNMI is the ability to do so. The option they provide is to correlate the change in temperature with time.

    At the KNMI Climate Explorer…

    http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

    …under observations, pick any dataset. On the next page, in the right-hand menu, select “correlate with a time series”. Across the top on the next page there are a number of standard choices identified as “System-defined monthly timeseries”. If you were to click in the “i” (help) button, KNMI explains:

    “System-defined monhly time series
    “As the roots of the Climate Explorer are in ENSO teleconnections, the following monthly time series are always available to correlate when the time scale is monthly. For other time scales only time is available at the moment.”

    “…Time: centered on 1-jan-2000. Very useful to do trend analyses; the regression against time is the trend.”

    Sorry you can’t read KNMI’s title blocks. I had to reduce the size of the maps so that I could work with them more easily. Here are the full-sized correlation maps.
    Observations:

    Models:

    KNMI tries to get too much info into the title block for the models, but rest assured I used the same parameters for the models and observations.

  12. Nick Stokes says: “I’m not sure that they do use it so faithfully. But I’m sure no-one claims that the MMM is a measure of climate variability. The MMM is much less variable than individual runs.”

    To the contrary, on the thread of the RealClimate post Decadal predictions…

    http://www.realclimate.org/index.php/archives/2009/09/decadal-predictions/

    …a visitor asked the very basic question, “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?”

    Gavin Schmidt replied:

    “Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”

    To paraphrase Gavin Schmidt’s comment, the individual ensemble members are unnecessary information. The model mean, which is the average of the ensemble members, represents the “forced component” of the anthropogenic and natural factors that serve as inputs to the climate models.

    Nick Stokes says: “And that’s HaroldW’s point. The MMM may give a reasonable estimate of trend. But in calculating the correlation one divides the trend by the standard deviation of the temperature estimate. And that is expected to be much less for a MMM than for the measured SST, just through averaging. That’s why the plot is red.”

    The multi-model mean does not give a reasonable estimate of trends. This can be seen in any comparison of models and observations on a zonal mean basis for any ocean basin, the Pacific for example:

    The basic reason the multi-model mean plot is red is because the models assume sea surface temperatures are warmed via a gradual rise in greenhouse gases, but there is no evidence in the sea surface temperature records that greenhouse gases have had any impact on sea surface temperatures for the last 30 years.

  13. Bob -
    “The regression against time is the trend.” Can you repeat the steps you took in KNMI, but select regression instead of correlation?

  14. While I agree with you that the northern hemisphere SST seems to follow ENSO, it appears to be the opposite with the southern hemisphere. Notice that the large drop in southern ocean SST occurs from 2006 to 2007 while it occurs from 2007 to 2008 in the Nino 3.4 (La Nina). The Southern Ocean, with the Antarctic circumpolar current, appears to be in the driver’s seat right now in my opinion. Record westerly winds driving the circumpolar current are causing unusually cold upwelling water that is migrating up the west coasts of South America and Africa, which is in turn currently bringing south easterly winds and cold water into the Nino 3.4 region. The Southern Ocean hosts the only current that circles the entire earth and feeds all the ocean basins. It would be interesting to see a cross correlation of the southern ocean SST with Nino 3.4 to see which really leads.

    http://wattsupwiththat.com/reference-pages/ocean-pages/ocean/

    http://www.physicalgeography.net/fundamentals/8q_1.html

  15. Bob says:

    To paraphrase Gavin Schmidt’s comment, the individual ensemble members are unnecessary information. The model mean, which is the average of the ensemble members, represents the “forced component” of the anthropogenic and natural factors that serve as inputs to the climate models.

    You are too generous. The variability incorporated into the models is to make the models look like they model/simulate natural variability, when in fact they do no such thing. But to the untutored eye, one set of wiggles looks pretty much like another.

    The variability between model runs is a con, pure and simple.

  16. Bob Tisdale says: October 8, 2012 at 3:55 pm

    “The basic reason the multi-model mean plot is red is because the models assume sea surface temperatures are warmed via a gradual rise in greenhouse gases”

    They model the whole physics. GHG concentrations are part of the forcing data. But in models too, many things determine SST. The relation to GHG isn’t an assumption – it’s a result.

    But you still seem to be under the impression that correlation coefficients are in some way a measure of trend. If you doubled (or halved) the MMM temperature values you’d get exactly the same corr plot. Same with SST.

  17. Nick Stokes says: “But you still seem to be under the impression that correlation coefficients are in some way a measure of trend. If you doubled (or halved) the MMM temperature values you’d get exactly the same corr plot. Same with SST.”

    Nope. In the post, I wrote: Basically, what the maps are showing are the modeled and observed warming and cooling trend patterns.

    Patterns is the key word, Nick.

  18. richcar 1225 says: “The southern ocean with the Antarctic circum polar current appears to be in the drivers seat right now in my opinion…”

    Multidecadal variations in the sea surface temperatures of the Southern Ocean may have been the “driver” all along. Unfortunately, we only have about 30 years of real data there.

  19. HaroldW says: “Can you repeat the steps you took in KNMI, but select regression instead of correlation?”

    Yes, if you’ll explain why you’re interested to those reading this thread.

    I needed to revise the contour range. Here’s the observations:

    And here’s the models:

    I could have tweaked the contour ranges a little more to bring out more colors, but the differences in the spatial patterns are becoming clear.

  20. Bob -
    Thanks. The latest graphs (in your 6:29 pm comment) show the actual and predicted temperature trends, in K/yr. Among other things, one can read from them that (a) the North Atlantic has been rising faster than predicted; (b) the Eastern Pacific has actually been cooling where an increase was predicted; (c) the Indian Ocean has warmed but not as fast as predicted. Your basin-specific time-series graphs, with their regression lines, show much the same information. But the maps show it at a glance.

    [I don't know why you think the initial maps, showing correlation coefficient, can be used to compare trends. There is a relationship between correlation and trend. Roughly,
    trend (SST vs. time) = correlation(SST, time) * sqrt ( variance(SST) / variance(time) ).
    But the proportionality depends on the variance of the SST, and as you noted, because of the averaging of the simulated "natural variation", the variance of the multi-model mean tends to be less than that of the observations. (For that matter, it's less than the variance of a typical individual model run.) So one can't draw conclusions about trend bias in the multi-model mean, simply because its time-correlation coefficient is higher than that of the actual SST. I'll drop the subject now.]
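    [The bracketed relation is easy to check numerically; here is a sketch with synthetic numbers (all made up) showing both the identity and the point that rescaling the temperatures changes the trend but not the correlation:]

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(360, dtype=float)
sst = 0.0015 * t + 0.25 * rng.standard_normal(360)  # toy trend + noise

slope = np.polyfit(t, sst, 1)[0]
corr = np.corrcoef(sst, t)[0, 1]
# trend = correlation(SST, time) * sqrt( variance(SST) / variance(time) )
reconstructed = corr * np.sqrt(sst.var() / t.var())

print(np.isclose(slope, reconstructed))  # True: the identity holds

# Doubling the amplitude doubles the slope but leaves the correlation alone:
print(np.isclose(np.corrcoef(2 * sst, t)[0, 1], corr))  # True
```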

  21. Nick Stokes says:

    They model the whole physics. GHG concentrations are part of the forcing data. But in models too, many things determine SST. The relation to GHG isn’t an assumption – it’s a result.

    It is a result of several assumptions.

  22. Bob, I think this is a key point when you say “To paraphrase Gavin Schmidt’s comment, the individual ensemble members are unnecessary information. The model mean, which is the average of the ensemble members, represents the “forced component” of the anthropogenic and natural factors that serve as inputs to the climate models. “

    Each model presumably tries to estimate both the trend and the variability. Of course, predicting the specific year when an El Nino will hit or a volcano will erupt is impossible, so the model variability will necessarily be different from the actual variability.

    When you start averaging a large number of models, each with different variability, then the variability will necessarily average out, leaving only the overall trend. Since the overall trend can be modeled pretty well as a straight line (especially over such a short period), the multi-model mean will necessarily approach something like a straight line. This in turn means that the correlation with time will necessarily be close to 1 and your graph will be close to red.

    I really don’t see this as a “smoking gun” to attack the models.

  23. What I think would be much more informative than (Model vs Time) and (Actual vs Time) would be (Model vs Actual) and (Random vs Actual). IN other words, how well do the actual values correlate with the predictions — and it that result any better than simple rolling the dice to make a prediction? Presumably truly random data would have close to 0 correlation (but of course it would be above 0 in some places and below 0 in others). If the models are better than guessing, then the map should be a bit red overall (a bit of a net positive correlation). (And since there has been an overall warming of the oceans and a predicted warming, I will pretty much guarantee the map will indeed tend toward red).

    In fact, here every single model could be tested to see if it is better than random guessing. I suspect most would be.
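    A toy version of that test, with synthetic series standing in for the model and the observations (all numbers made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 360
# Toy "observations": a modest trend plus year-to-year noise.
actual = 0.0012 * np.arange(n) + 0.15 * rng.standard_normal(n)

# A "model" that captures the trend but not the specific wiggles...
model = 0.0012 * np.arange(n) + 0.15 * rng.standard_normal(n)
# ...versus pure dice-rolling with no trend at all.
random_guess = 0.2 * rng.standard_normal(n)

def skill(pred, obs):
    """Correlation of predictions against observations."""
    return np.corrcoef(pred, obs)[0, 1]

print(skill(model, actual))         # clearly positive: shared trend
print(skill(random_guess, actual))  # near zero: no shared signal
```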

  24. Bob,

    A priori I expect regions where (the oil) industry is leaking a little onto adjacent oceans/seas to warm, so I come to your maps with a built-in urge to look for specific things. I’m OK with the Arctic warming — the Siberian rivers dump 500,000 tonnes of light oil per year. However, I can’t see an expected warming in the Bay of Bengal — lots of new industry up river – and why is there that little red area off Surinam?

    Puzzling.

    I’d love to see a set of trends for a combined Okhotsk, East Siberian, Kara, Barents, Laptev and Beaufort seas. Better still would be an examination of the total areas polluted by industrial runoff — a quantification of the Kriegsmarine Effect.

    JF
    Sorry about the typos, I have an intermittent keyboard fault and I’m in a bit of a hurry.

  25. HaroldW says: “I don’t know why you think the initial maps, showing correlation coefficient, can be used to compare trends…”

    I never said the maps could be used to compare trends.

    Once again, I wrote, Basically, what the maps are showing are the modeled and observed warming and cooling trend patterns.

    “Patterns” is the key word. It’s also included in the title block of my Figure 2. And like the regression maps, they’re showing different spatial patterns.

  26. HaroldW: PS, thanks for reminding me I should have been using regression and not correlation analysis in the maps. Even after I cut and paste the KNMI notes for Nick, it still hadn’t struck me, because to me it was always the “patterns” were wrong in the models. Thanks again. I’ll replace the maps.

    Regards.

  27. tjfolkerts says: “Each model presumably tries to estimate both the trend and the variability. Of course, predicting the specific year when an El Nino will hit or a volcano will erupt is impossible, so the model variability will necessarily be different from the actual variability.”

    This is primarily a hindcast. The timing of the volcanic eruptions is known. The models are also unable to simulate basic ENSO processes, which is why I linked Guilyardi et al in the post. Models also have difficulties with teleconnections, which are what cause sea surface temperatures outside of the eastern equatorial Pacific to warm during El Niño events. Models also do not replicate ENSO over this time period, which is what determined how warm water was redistributed from the tropical Pacific during the major El Niño events of 1986/87/88 and 1997/98. In other words, the models assume the warming of sea surface temperatures over this period is caused by greenhouse gases, while there is no evidence of that in the sea surface temperature records.

  28. Nick Stokes says: “They model the whole physics. GHG concentrations are part of the forcing data. But in mdels too, many things determine SST. The relation to GHG isn’t an assumption – it’s a result.”

    The models assume greenhouse gases warm the surface and subsurface temperatures of the global oceans. There is no evidence, however, that greenhouse gases had any impact on the warming of the oceans over the past 30 years.

    Regarding the correlation maps, my apologies. My head was stuck on the spatial patterns being wrong, hence the use of correlation maps. HaroldW reminded me I should have been using regression maps, not correlation. Here’s the trends for the observations:

    And here’s the multi-model mean:

    The spatial patterns are still wrong. They are wrong because the models incorrectly assume greenhouse gases warm the oceans, and because they do not simulate ENSO and teleconnections properly.

  29. Bob, to me it looks like CMIP5 ocean surface temperature is present at the Climate Explorer (faring not much better than CMIP3+, though …).

    If I go to the CMIP5 data and click on “Ocean and Ice variables” at the top of the page there is button for “tos” data for the multi-model mean for each of the four RCP runs.

    Or am I missing something?

  30. Bob (1:49 am)-
    Understood, and agreed that there are discrepancies in the patterns between predicted & observed. There will be similarities between the patterns of the correlation and regression maps, because mathematically they’re related. But the correlation map, for the multi-model mean especially where most of the variance is due to trend, tends to wash out the pattern, and partially conceals the trend. The regression map shows pattern and magnitude.

  31. Bob,
    “The models assume greenhouse gases warm the surface and subsurface temperatures of the global oceans.”
    You keep saying that. What evidence do you have? Where do they make that assumption?

  32. I’d love to see a set of trends for a combined Okhotsk, East Siberian, Kara, Barents, Laptev and Beaufort seas. Better still would be an examination of the total areas polluted by industrial runoff — a quantification of the Kriegsmarine Effect.

    The Kriegsmarine Effect may play a role, but the Arctic sea ice melt off the coast of Russia (almost all the Arctic sea ice melt over the last 10 years) is primarily due to the Russian Financial Crisis in 1998, and the subsequent shutdown of most of the Soviet-era heavily aerosol-polluting industry, especially in northern Russia and Siberia. Reduced aerosols + aerosol-seeded clouds = increased summer insolation and increased sea ice melt, augmented by black carbon embedded in the ice. Hence the disproportionate melt of older ice.

  33. Jos says: “If I go to the CMIP5 data and click on ‘Ocean and Ice variables’ at the top of the page there is button for ‘tos’ data for the multi-model mean for each of the four RCP runs…Or am I missing something?”

    Nope. You’re not missing anything. The CMIP5 outputs come and go. When I was preparing the spreadsheet for this post a couple of weeks ago, the multi-model mean TOS output wasn’t there. Thanks for looking and finding that it has reappeared. That gives me an excuse to redo the post with the CMIP5 model outputs, which have fared no better when I examined them in the past.

    Maybe KNMI will make the raw UKMO EN3 ocean heat content data reappear soon, too.

    Thanks again.

    Regards

  34. Nick Stokes says: You keep saying that. What evidence do you have? Where do they make that assumption?

    What evidence? How about the IPCC’s AR4?

    From the IPCC AR4 Working Group 1 Summary for Policymakers. It’s from the fourth bullet-point paragraph under the heading of “Understanding And Attributing Climate Change” (page 10):

    “The observed patterns of warming, including greater warming over land than over the ocean, and their changes over time, are only simulated by models that include anthropogenic forcing.”

    The IPCC further clarified and reinforced that statement in Chapter 9, Understanding and Attributing Climate Change, under the heading “9.4.1.2 Simulations of the 20th Century”, where they wrote:

    “Figure 9.5 shows that simulations that incorporate anthropogenic forcings, including increasing greenhouse gas concentrations and the effects of aerosols, and that also incorporate natural external forcings provide a consistent explanation of the observed temperature record, whereas simulations that include only natural forcings do not simulate the warming observed over the last three decades.”

    The above quotes were presented about 10 months ago in the following post, which was cross posted here at WUWT:

    http://bobtisdale.wordpress.com/2011/12/07/on-the-skepticalscience-post-pielke-sr-misinforms-high-school-students/

    Nick, this nonsense appears to have been repeated in Gillett et al (2012) “Improved constraints on 21st-century warming derived using 160 years of temperature observations”

    http://www.agu.org/pubs/crossref/2012/2011GL050226.shtml

    I’m sure there are more that were prepared for AR5.

  35. HaroldW: The following is the replacement for Figure 2:

    Looking better?

    Thanks for your help on this thread. Figure 2 was a last minute addition to the post. (More excuses: The sun was in my eyes and I tripped over a rock.)

  36. At what point do the two look alike, meaning we have the observations, and then the model looking similar? Does the current look like 1991 by model?

    The question is whether our model is “right” but wildly accelerated, or whether it is wrong from the get-go.

  37. Bob,
    Looks good to me. Thanks for increasing the contrast compared to the first try, it shows much more detail on the multi-model mean map. The earlier one showed the scale much clearer, though.

  38. The uncertainty in annual measurements of the global average temperature (95% range) is estimated to be ≈0.05°C since 1950 and as much as ≈0.15°C in the earliest portions of the instrumental record. The error in recent years is dominated by the incomplete coverage of existing temperature records. Early records also have a substantial uncertainty driven by systematic concerns over the accuracy of sea surface temperature measurements.

  39. The real fun starts when one compares longer trends with the models. The models somehow simulate the 1975-2005 warm period (especially for the Northern Pacific and Atlantic, though they fail miserably at the Antarctic), but they are totally unable to capture the 1945-1975 cooling and the even more pronounced 1910-1945 warming (in the case of the Northern Pacific, twice as large as the modern period). Dunno what physics powers them 8-/

  40. gold account says: “The uncertainty in annual measurements of the global average temperature (95% range) is estimated to be ≈0.05°C since 1950 and as much as ≈0.15°C in the earliest portions of the instrumental record. The error in recent years is dominated by the incomplete coverage of existing temperature records. Early records also have a substantial uncertainty driven by systematic concerns over the accuracy of sea surface temperature measurements.”

    And that’s why you see me discussing satellite-era sea surface temperature records.

  41. Bob Tisdale says: October 9, 2012 at 7:03 am

    “How about the IPCC’s AR4?”

    You’ve got the logic of that wrong. The assumption they make, if you could call it that, is that GHG’s absorb outgoing IR. When that is factored in to the calculations, warming results from anthropogenic GHG additions. But they are not assuming the warming. They are asserting a result.

    Some of the codes are well documented. Here is CAM 3, for example. I was hoping you could point to where they “assume greenhouse gases warm the surface and subsurface temperatures of the global oceans”.

  42. Nick Stokes says:

    You’ve got the logic of that wrong.

    No, Bob has got it right.

    The assumption they make, if you could call it that, is that GHG’s absorb outgoing IR.

    The assumption that they make is that their specific conceptualization of “GHG’s absorb outgoing IR” along with all of the feedbacks, etc that they associate with that, is the correct way to fix the problem that their models don’t calibrate. And that is an assumption. An assumption that is proven wrong when the models so calibrated don’t predict. That last bit being the point of Bob’s post.

    When that is factored in to the calculations, warming results from anthropogenic GHG additions.

    Too much warming, overall. That being the net of way too much in most places and too little in a few.

    But they are not assuming the warming. They are asserting a result.

    Asserting a result of their assumptions. Assumptions made because they assume warming.

    Multiple unknowns, multiple errors. The only way to solve that equation is to make multiple assumptions.

    ‘Global warming’ is the ‘god of the gaps’ for people who don’t think they are religious.

  43. Nick Stokes: Would you prefer I change models to modelers? That is, the sentence would now read: The MODELERS assume greenhouse gases warm the surface and subsurface temperatures of the global oceans.

  44. Nick Stokes: OR…would you prefer I change assume to “erroneously indicate”? That is, the sentence would now read: The models ERRONEOUSLY INDICATE greenhouse gases warm the surface and subsurface temperatures of the global oceans.

  45. Would you prefer I change models to modelers? That is, the sentence would now read: The MODELERS assume greenhouse gases warm the surface and subsurface temperatures of the global oceans.

    OR…would you prefer I change assume to “erroneously indicate”? That is, the sentence would now read: The models ERRONEOUSLY INDICATE greenhouse gases warm the surface and subsurface temperatures of the global oceans.

    Most accurate would probably be:

    The MODELERS assume that greenhouse gases warm the surface and subsurface temperatures of the global oceans. The MODELERS assume that there are no changes in the earth’s surface albedo of the land or ocean (due to rapidly expanding plant and plankton growth caused, in part, by increased CO2).

    Because GC MODEL internal programming is not released for study and critique, it is impossible to know how these programming assumptions affect the MODELS’ projection of global temperatures over the next 1000 years.

  46. Nick Stokes says: October 9, 2012 at 2:48 pm

    “…… the IPCC’s AR4?…… ” “… You’ve got the logic of that wrong. The assumption they make, if you could call it that, is that GHG’s absorb outgoing IR. When that is factored in to the calculations, warming results from anthropogenic GHG additions. But they are not assuming the warming. They are asserting a result….”

    This seems a very simple matter.

    Of all the various parameters, one or more assume (or are set to produce) a net gain in energy in the system. Meaning that after all the possible feedbacks, positive and negative, all the cycles and oscillations are taken into account, and all the spaghetti coding is rotated through, the settings of the “incoming energy” parameters are such that they outweigh the “outgoing energy” parameter settings, and the sum is that the system has a net gain in energy.

    A system that has a net gain in energy will warm up.

    It’s all in setting the dials.

    It might be preferable if the model parameters were simply set in degrees C and the coding left out.
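    The “net gain in energy” argument above can be sketched with a toy zero-dimensional energy-balance model (purely illustrative; the constants are textbook-style assumptions, not taken from any GCM’s code). Whichever way the absorbed-minus-emitted “dials” (albedo, effective emissivity) are set determines whether the integrated temperature rises or falls.

    ```python
    # Minimal zero-dimensional energy-balance sketch. If the dials are set
    # so that absorbed energy exceeds emitted energy, temperature must rise
    # until the two balance.
    SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    SOLAR = 342.0        # globally averaged insolation, W m^-2
    ALBEDO = 0.30        # planetary albedo (one tunable "dial")
    EMISSIVITY = 0.61    # effective emissivity (another "dial")
    HEAT_CAP = 2.0e8     # mixed-layer ocean heat capacity, J m^-2 K^-1
    DT = 86400.0         # one-day time step, s

    def step(temp_k):
        """Advance temperature one step from the net energy imbalance."""
        absorbed = SOLAR * (1.0 - ALBEDO)
        emitted = EMISSIVITY * SIGMA * temp_k ** 4
        return temp_k + (absorbed - emitted) * DT / HEAT_CAP

    temp = 286.0                 # start slightly below equilibrium
    for _ in range(365 * 50):    # integrate 50 years
        temp = step(temp)

    # The system warms until absorbed == emitted, i.e. until
    # T = (SOLAR * (1 - ALBEDO) / (EMISSIVITY * SIGMA)) ** 0.25.
    print(temp)
    ```

    With these dial settings the run warms toward its equilibrium; nudge ALBEDO or EMISSIVITY and the same code cools instead, which is the commenter’s point about parameter choices determining the sign of the result.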

Comments are closed.