We used NOAA’s CAMS-OPI satellite-era precipitation data in the post Model-Data Precipitation Comparison: CMIP5 (IPCC AR5) Model Simulations versus Satellite-Era Observations. Unfortunately, we weren’t able to isolate precipitation data for land and oceans with that dataset at the KNMI Climate Explorer. But there is another satellite-era precipitation dataset available at the KNMI Climate Explorer, and it is available with land and ocean masks. It’s NOAA’s Global Precipitation Climatology Project (GPCP) Version 2.2. Like the CAMS-OPI data, the GPCP v2.2 precipitation data is based on satellite and rain gauge observations.
Figure 1 compares the GPCP v2.2 precipitation anomalies for global land and ocean surfaces. The dataset starts in January 1979 and ends in February 2013. Both land and ocean precipitation anomalies have been smoothed with 13-month running-average filters to suppress the monthly variability. Looking at the global ocean precipitation anomalies (red curve), it’s blatantly obvious that the primary causes of annual precipitation variations are El Niño and La Niña events. The 1982/83, 1986/87/88, 1997/98 and 2009/10 El Niño events are plainly visible, and you can also make out the lesser El Niños in the early 1990s and mid-2000s. The trailing La Niñas are also evident.
Figure 1
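For readers who want to reproduce the smoothing, a centered 13-month running average is simple to compute. The sketch below uses random placeholder values, not the actual data; note that a centered filter leaves the first and last six months undefined:

```python
import numpy as np

def running_mean_13(x):
    """Centered 13-month running average. The first and last six
    points are left as NaN, since a centered window needs six
    months of data on each side of the index month."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    out[6:-6] = np.convolve(x, np.ones(13) / 13.0, mode="valid")
    return out

# Placeholder monthly anomalies (random values, for illustration only)
rng = np.random.default_rng(0)
anoms = rng.normal(0.0, 0.1, size=120)
smoothed = running_mean_13(anoms)
print(np.isnan(smoothed[:6]).all(), np.isnan(smoothed[-6:]).all())  # True True
```

This is also why a properly centered smoothed curve cannot extend all the way to the ends of the record.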
The opposing relationship between ocean precipitation and land surface precipitation is also obvious. Land surface precipitation generally drops in response to El Niños and increases during La Niñas. There is also a strong dip and rebound in the land surface precipitation data starting about 1991 that may be a response to the eruption of Mount Pinatubo. Curiously, the ocean data does not show a similar response.
There also appear to be other factors contributing to the longer-term variations. The Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation, the Indian Ocean Dipole, and other coupled ocean-atmosphere processes are the likely suspects. But El Niño-Southern Oscillation (ENSO) is one of the primary factors governing precipitation and the water cycle on this planet, if not the primary factor.
And what can’t climate models simulate? ENSO. For further information about climate model failings when trying to simulate ENSO, refer to Guilyardi et al. (2009). Climate models also can’t simulate the Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation, the Indian Ocean Dipole, and other coupled ocean-atmosphere processes.
I thought of ending the post there. There really is no need to continue. But for those interested, Figures 2 and 3 compare the CMIP5-archived models to the land surface precipitation anomalies and the precipitation data over the oceans. Ocean and land masks are available through the KNMI Climate Explorer for the model outputs as well. As noted in the title blocks, we’re using the multi-model ensemble member mean of all of the models in the CMIP5 archive. As with the other model-data comparisons, we’re using RCP6.0 because it is the most similar to the widely used A1B scenario from earlier modeling efforts. And as a reminder, the models in the CMIP5 archive are being used by the IPCC for their upcoming 5th Assessment Report.
Figure 2
Figure 3
The climate models simulate increases in precipitation over both land and ocean surfaces, and the rates are very similar. But the data shows basically no long-term trend over the oceans and a decline over land. In more basic terms, according to the climate models, if manmade greenhouse gases were responsible for the changes in precipitation over the past few decades, precipitation over land surfaces would have increased, but the data show it has declined.
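The long-term trends being compared here are just least-squares slopes fitted through the monthly anomalies. A minimal sketch of that calculation, using synthetic stand-in series rather than the actual model or observational data:

```python
import numpy as np

def decadal_trend(monthly_anomalies):
    """Least-squares linear trend of a monthly series, per decade."""
    t = np.arange(len(monthly_anomalies)) / 12.0  # time in years
    slope, _intercept = np.polyfit(t, monthly_anomalies, 1)
    return slope * 10.0

# Synthetic stand-ins: a trendless "ocean" series, a declining "land" series
n = 410  # January 1979 through February 2013 is 410 months
t = np.arange(n) / 12.0
ocean = np.random.default_rng(1).normal(0.0, 0.05, n)
land = -0.02 * t + np.random.default_rng(2).normal(0.0, 0.05, n)
print(decadal_trend(ocean), decadal_trend(land))
```

With these invented series, the ocean trend comes out near zero and the land trend near minus 0.2 units per decade, mimicking the qualitative behavior described above.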
STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN (With a Minor Addition that’s Underlined)
We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.
The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:
The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:
If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; so we use the average because we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily, the Wayback Machine has a copy. NCAR wrote the following on that FAQ webpage, which had been part of an introductory discussion about climate models (my boldface):
Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.
In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
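The argument quoted above is easy to demonstrate numerically: average enough realizations of the same forced signal plus independent noise, and the noise shrinks by roughly the square root of the number of members while the forced signal survives. A toy sketch; the forced trend and noise level here are invented, not taken from any model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_months, n_members = 408, 30
t = np.arange(n_months) / 12.0
forced = 0.01 * t  # the imposed "forced" signal (invented trend)

# Each "ensemble member" is the forced signal plus independent noise
members = forced + rng.normal(0.0, 0.2, size=(n_members, n_months))
ensemble_mean = members.mean(axis=0)

resid_single = np.std(members[0] - forced)   # noise in one member
resid_mean = np.std(ensemble_mean - forced)  # noise in the ensemble mean
print(resid_single / resid_mean)  # roughly sqrt(30), about 5.5
```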
CLOSING
We can add global precipitation anomalies over land and over the oceans to the growing list of climate model failures. The others included:
Greenland and Iceland Land Surface Air Temperature Anomalies
Scandinavian Land Surface Air Temperature Anomalies
Alaska Land Surface Air Temperatures
Daily Maximum and Minimum Temperatures and the Diurnal Temperature Range
Satellite-Era Sea Surface Temperatures
Global Surface Temperatures (Land+Ocean) Since 1880
And we recently illustrated and discussed in the post Meehl et al (2013) Are Also Looking for Trenberth’s Missing Heat that the climate models used in that study show no evidence that they are capable of simulating how warm water is transported from the tropics to the mid-latitudes at the surface of the Pacific Ocean. So why should we believe they can simulate warm water being transported to depths below 700 meters without warming the waters above 700 meters?
Surface temperatures and precipitation are the two primary metrics that interest humans. Will the future be warmer or cooler? And will it be wetter or drier? Climate models show no skill at being able to answer those two fundamental questions about climate change.



MThompson says: “Well, I guess I was misled by the chart title that states that the data was smoothed with a 13-month filter. (sic)”
There was nothing misleading about my presentation of data and model outputs—or in the text of the post that accompanies them. If you were misled, MThompson, it was due to your own failure to grasp what was presented. That is, you misled yourself. There are 3 graphs in this post, MThompson. Figure 1 notes in its title block that the data have been smoothed with 13-month filters—not 6-year filters as you mentioned in your first comment. My intent in Figure 1 was to highlight the ENSO components in the two datasets. The use of a 6-year filter as you recommend would have suppressed the ENSO-related variability—or aren’t you aware of that? Figures 2 and 3 contain the trends. Do they state in their title blocks that the data have been smoothed, MThompson? No. I presented the monthly data, because trend analyses of smoothed data often provide different results than the raw data, and I did not want to present skewed trends to my readers.
MThompson says: “You seem a little touchy, Bob. I know a lot of people jump on you, and that’s why.”
I’m not touchy. I try to write concisely, especially when dealing with someone displaying troll-like behavior. And rarely do “a lot of people jump on” me. What you think you “know” about my responses to your nonsensical comments is obviously incorrect.
MThompson says: “Really, all I want to point out is that everyone who wants to present data should follow best practices. This is not a criticism of your work, merely your presentation.”
There’s nothing wrong with my presentation of data. The problem lies in your ability to grasp what’s presented, MThompson.
In your earlier comment, MThompson, you wrote, “Your chart should omit six years of smoothed data from both start and endpoint.”
If that’s your understanding of “best practices” of data presentation, then your understanding of “best practices” is clearly lacking.
Have a nice day.
Jimbo says:
July 10, 2013 at 4:33 pm
This is the wrong approach. Ask for the list of successes since 1988.
==========
I have a stopped clock that has been right more than 18,000 times since 1988.
What is interesting is that the models predict increased precipitation, in line with positive water feedback, while reality shows reduced precipitation, which is in line with negative water feedback due to the increased partial pressure of CO2 reducing the amount of H2O in the atmosphere.
Yet not a single climate model uses negative water feedback, and not a single climate model matches reality. High school chemistry quite clearly taught that if atmospheric pressure remains the same and you increase the amount of CO2, then the amount of some other gas will be reduced, all else remaining the same. The most likely gas to be reduced is water vapor, because it exists naturally as a solid, liquid and gas – something no other atmospheric gas can claim.
How can the average of bad guesses based on the wrong variables be anything other than their common errors? The non-noise output of the models is shared error.
fred;
Agree! The vast bulk of the atmosphere is composed of N2 and O2, non-radiative non-GHGs. They are unable to dispose of sensible heat except through evaporative loss from the top of the atmosphere. Only GHGs, especially H2O, can radiate energy to space. Hence, in their absence, the atmosphere would heat until it could “boil” away enough mass to counterbalance solar irradiation.
Hence GHGs are cooling agents which preserve atmospheric mass. The Warmist (and Luke-warmist) positions are 180° wrong. As usual.
Water rulez.
The strong anticorrelation between land and ocean rainfall is interesting.
It seems to exist in the unsmoothed data too.
Comparing the increases in ocean rainfall with UAH lower-troposphere temperature, it seems the ocean rainfall leads temperature rises by 3 to 6 months.
Is it possible to mask down the rainfall data to see if this is happening more specifically in the “cold tongue” region and the NINO3.4 region in the Pacific?
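For what it’s worth, a lead/lag relationship like the one described can be checked by correlating one series against lagged copies of the other and finding the lag that maximizes the correlation. The sketch below uses synthetic series; `best_lag` is a hypothetical helper, and the real check would use the ocean rainfall and UAH lower-troposphere anomalies:

```python
import numpy as np

def best_lag(leader, follower, max_lag=12):
    """Lag (in months) at which `leader` best correlates with
    `follower`. Positive lag = leader leads."""
    best, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        a = leader[:len(leader) - lag] if lag else leader
        b = follower[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r

# Synthetic check: follower is leader delayed by 4 months plus noise
rng = np.random.default_rng(3)
x = np.zeros(404)
for i in range(1, 404):  # AR(1) "climate-like" series
    x[i] = 0.5 * x[i - 1] + rng.normal()
leader = x[4:]                                # stand-in "rainfall"
follower = x[:-4] + rng.normal(0, 0.1, 400)   # stand-in "temperature"
lag, r = best_lag(leader, follower)
print(lag)  # 4
```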
Bob, n=13 smoothing filter should be constructed by averaging six data points to the left, six to the right and the index data point. This means that a smoothed value for the initial six and the final six observation points does not exist. This is not central to your thesis, but I care.
I do however agree with you that my personal attack (indicating that you seem a little touchy) was completely deserving of your retaliation impugning my motivation, education and intelligence. Thank you for putting me in my place.
Hello, I am new to this site. I am from the Netherlands, and the graphic discussed here is from our national meteorological institute. This institute is biased and needs money.
But the most interesting thing about the graphic is that it is consistent with our winters in the Netherlands.
Extremely cold winters occurred in 1985, 1987, 1993, and 2013, that is, at the lowest points in the land precipitation. So we can expect an even colder winter next, in 2014.
Still, our national (left) government claims, with the help of national TV and the KNMI, that temperatures are above average and that climate warming is a fact…only for us to have cold feet in mid-summer now…and to pay hefty taxes on energy and such.
Be ware…USA…do not let this happen to you.
Merrick (July 10, 2013 at 10:28 am) asked:
“Paul Vaughan, in what way do you think these data would “constrain” the models? The subtle impact that evolving earth parameters have on insolation is obvious, but small. Is there some other impact these data have on the models I haven’t considered?
Thanks.”
Reminder from NASA JPL: Temperature, mass, & velocity ARE COUPLED.
Be careful: Earth orientation parameters are precisely informative climate indicators, not climate drivers. Earth orientation parameters should not be confused with earth orbital parameters.
For one example, if the climate models are getting the evolution of seasonal wind fields right, they will be able to accurately mimic the decadal volatility clustering of the semiannual length-of-day (LOD) term. Model failure in this case would sharply highlight insufficient attention to temperature GRADIENTS.
Further advice:
“Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — Nikolay Sidorenkov
MThompson says: “Bob, n=13 smoothing filter should be constructed by averaging six data points to the left, six to the right and the index data point. This means that a smoothed value for the initial six and the final six observation points does not exist. This is not central to your thesis, but I care.”
Look closely at the graph in Figure 1, MThompson. The 13-month running-average filters are centered on the 7th month and the 6 starting data points and 6 ending data points are not shown. Your complaint is unwarranted.
Instead of confronting, it’s always best to ask, MThompson. Stating that my graphs are misleading doesn’t sit well with me, as you have seen.
Regards
Andre says: “Be ware…USA…do not let this happen to you.”
Thanks, Andre, and welcome to WattsUpWithThat. Unfortunately, it’s already started happening to us here in the USA.
Regards
Andre: You referred to a graph in your comment, but one wasn’t shown. If you tried to upload it directly with your comment, that will not work. If it’s a graph that’s already online, simply provide a link to it. Example is my Figure 1 above:
http://bobtisdale.files.wordpress.com/2013/07/figure-15.png
If you’ve created the graph yourself and it resides on your computer, then you should upload it to a picture-sharing website like TinyPic:
http://tinypic.com/
They then provide an html address to the picture.
Regards
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’).
========
Noise is the simplest example of chaos. It is chaos of 1 dimension. It is the 2 body problem in orbital mechanics. A single object orbiting 1 attractor. We can solve this mathematically and statistical theory is almost entirely based on this model of reality. The orbit has a mean and a variance.
Weather however is chaos of many dimensions. It is the N body problem in orbital mechanics. It is an object orbiting many attractors. For example, we can easily see that temperature orbits attractors with periods of 24 hours and 365.25 days. Many other strong attractors are hinted at in the temperature records.
We cannot solve this mathematically. Traditional statistics deals poorly with chaos of more than one dimension because the average orbit and variance are largely meaningless. As you increase the scale, the answer does not converge on a single mean. Rather, there are many different means and averages, all operating with different orbital periods.
For example daily average temperature varies wildly over the period of a year, and over the cycle of ice ages. The average temperature of the earth changes as you increase the time scale, while in traditional statistics you would expect the average to converge as you increase the time scale.
We are only just beginning to create statistical models of reality to deal with this complexity. For example, to reduce chaos to a stochastic process of different order, and thus apply standard statistical theory. However, we have only begun to scratch the surface. This problem is not in any way unique to climate science. In quantum mechanics the Schrödinger equation gives us an exact solution for the hydrogen atom, but can only approximate reality for helium and heavier elements.
What do you think about this work: http://arxiv.org/ftp/arxiv/papers/1306/1306.0451.pdf ? The moon could have an impact on monsoons and precipitation.
@ferd berple
Your comments about chaos are based on theory and could be sharpened tremendously by looking more carefully at tuned aggregate properties of climate & earth orientation data.
…But the models are justified by THIS Wikipedia page, right??? …or does “instrumental” here mean models as the “instrument” of “measuring”… temperatures??? http://en.wikipedia.org/wiki/Instrumental_temperature_record