New global water vapor findings contradict second draft of IPCC Assessment Report 5 (AR5)
Guest post by Forrest M. Mims III
I was an “expert reviewer” for the first and second order drafts of the 2013 Intergovernmental Panel on Climate Change (IPCC) Assessment Report 5 (AR5). The names and reviews of all the reviewers will be posted online when the final report is released. Meanwhile, reviewers are required not to publish the draft report. However, the entire second draft report was leaked on December 13, 2012, without IPCC permission and has subsequently received wide publicity.
My review mainly concerns the role of water vapor, a key component of global climate models. A special concern is that a new paper on a major global water vapor study (NVAP-M) needs to be cited in the final draft of AR5.
This study shows no up or down trend in global water vapor, a finding of major significance that differs from studies cited in AR5. Climate modelers assume that water vapor, the principal greenhouse gas, will increase with carbon dioxide, but the NVAP-M study shows this has not occurred. Carbon dioxide has continued to increase, but global water vapor has not. Today (December 14, 2012) I asked a prominent climate scientist if I should release my review early in view of the release of the entire second draft report.
He suggested that I do so, and links to the official IPCC spreadsheet version and a Word version of my review are now posted near the top of my homepage at www.forrestmims.org.
The official IPCC spreadsheet version of my review is here. A Word version is here.
A PDF version (prepared by Anthony from the Word version) is here: Mims_IPCC_AR5_SOD_Review
A relevant passage from the AR5 review by Mims (added by Anthony):
The obvious concern to this reviewer, who has measured total column water vapor for 22.5 years, is the absence of any mention of the 2012 NVAP-M paper. This paper concludes,
“Therefore, at this time, we can neither prove nor disprove a robust trend in the global water vapor data.”
Non-specialist readers must be made aware of this finding and that it is at odds with some earlier papers. Many cited papers in AR5 have yet to be published, but the first NVAP-M paper was published earlier this year (after the FOD reviews) and is definitely worthy of citation: Thomas H. Vonder Haar, Janice L. Bytheway, and John M. Forsythe, “Weather and climate analyses using improved global water vapor observations,” Geophysical Research Letters, Vol. 39, L15802, 6 pp., 2012, doi:10.1029/2012GL052094.
vvenema says:
“December 17, 2012 at 1:57 pm
Dear MiCro, unfortunately it takes a bit more effort to compute a reliable climate signal.
First of all, your hourly data is partially from synoptic reports. These are meteorological reports for weather prediction; as this data is communicated fast across the globe for all weather services to use for weather predictions, it is not validated (well) and will contain many outliers due to measurement and communication errors. Based on the dataset you used (I think), the UK MetOffice has generated a quality-controlled dataset, HadISD, in which these outliers are removed. Another problem with this dataset is that it has many stations in the US and few stations elsewhere.
Still better would be to use the measured minimum and maximum temperature from climate stations, you can find them, e.g., in the GHCN data of NOAA or the new dataset of the International Surface Temperature Initiative.”
I remove the outliers prior to using the data; this is on top of the clean-up the NCDC already does.
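For readers curious what such an outlier screen can look like, here is a minimal sketch of one common robust approach, a median/MAD filter. This is an illustration only, not NCDC's or MiCro's actual procedure; the threshold `k` and the sample readings are invented.

```python
import numpy as np

def flag_outliers(temps, k=5.0):
    # Flag readings more than k robust standard deviations from the
    # median, using the median absolute deviation (MAD) as the scale.
    temps = np.asarray(temps, dtype=float)
    med = np.median(temps)
    mad = np.median(np.abs(temps - med))
    scale = 1.4826 * mad  # converts MAD to a sigma-like scale for normal data
    if scale == 0:
        return np.zeros(temps.shape, dtype=bool)
    return np.abs(temps - med) > k * scale

# A run of plausible readings with one communication-error spike:
readings = [14.2, 15.1, 13.8, 14.9, 99.9, 14.5]
flags = flag_outliers(readings)
```

Only the 99.9 reading gets flagged; the legitimate spread of the other values survives, which is the point of using the median rather than the mean.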
“Then there are jumps in the dataset due to changes in the instrumentation. In the beginning of your dataset in 1929, the temperature was probably recorded by a pencil attached to a bimetal strip writing on a slowly rolling drum of paper and later digitized; this device was probably placed in a Cotton Region Shelter. Nowadays automatic weather stations are used, which are often mechanically ventilated. You will have to remove such jumps, which is called homogenization. Or if you do not want to do that yourself, you could use the homogenized dataset of the GHCN.”
If you noticed I started at 1950 to eliminate years with the fewest measurements. But, this is the raw data that warmists all use, prior to their adjustments.
“Then between 1929 and now the number of stations has changed enormously, as you also show. If there is only a small tendency for stations to be more to the North or to the South, closer to the coast or higher or lower up the mountains, this will influence your “average climate signal”. Thus you should either use only stations that measured all the time, or you should normalise them by subtracting the average over a fixed period of a few decades (the way CRU does), or you should compute the difference from year to year and average those (the way NOAA does).”
I’ve done this (which I mention in the text of the second link), and it makes no difference in the results. Also, since the real goal was to get a difference signal (daily rise − fall), the real work is with data that was taken on the same equipment at the same location within 24 hours of each other.
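For illustration, the two normalisation options mentioned above, a fixed-baseline anomaly (CRU-style) and year-to-year first differences (NOAA-style), can be sketched in a few lines. The station values here are invented.

```python
import numpy as np

# Invented yearly mean temperatures (degC) for a single station.
station = np.array([10.2, 10.5, 10.1, 10.9, 11.0])

# CRU-style: subtract the station's own mean over a fixed base period
# (here the first three years), leaving anomalies.
anoms = station - station[:3].mean()

# NOAA-style: year-to-year first differences; any constant offset from
# altitude or exposure cancels out entirely.
diffs = np.diff(station)
```

Either way, the station's absolute level drops out, which is what allows stations at different altitudes and exposures to be averaged together.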
“To compute a global average climate signal, you cannot directly average over all data. There are many more stations in the industrialised countries. If you compute a normal average you would only see the climate signal in those countries. The best way to solve this problem is by interpolating over the entire land surface, e.g. by kriging, but even simple linear interpolation may be sufficient. Alternatively you can compute the average signal of all stations within grid boxes (for example 1×1 degree or 5×5 degree latitude and longitude) and then average over these grid boxes.”
The problem with interpolation is that temps are not linear over area. For instance, if you look at the Google maps I generated (second link), there are no stations in the middle of the Arctic; they are all on the coasts. I explicitly did not want to extrapolate temps, as I think it leads to errors. I can see 3-4 degrees difference between stations 40 miles apart.
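A minimal sketch of the grid-box averaging Venema describes, with four invented stations: three clustered together and one remote. It illustrates the weighting idea only, not CRU's or NOAA's actual code.

```python
import numpy as np

def grid_box_average(lats, lons, values, box=5.0):
    # Average the stations inside each box-by-box degree cell first,
    # then average the cell means, so a dense network cannot dominate.
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        key = (int(np.floor(lat / box)), int(np.floor(lon / box)))
        cells.setdefault(key, []).append(v)
    return float(np.mean([np.mean(vs) for vs in cells.values()]))

# Three co-located stations reading 1.0 and one remote station reading 3.0.
lats = [40.1, 40.2, 40.3, 70.0]
lons = [-88.1, -88.2, -88.3, 20.0]
vals = [1.0, 1.0, 1.0, 3.0]

naive = float(np.mean(vals))                  # 1.5: dominated by the cluster
gridded = grid_box_average(lats, lons, vals)  # 2.0: one vote per occupied cell
```

The gap between the two numbers is exactly the station-density effect being argued about: a naive mean weights the dense cluster three-to-one.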
Mr Mims,
My link was to the FAQ page of the NASA page you linked to. If it is out of date you should not have linked to it in the first place.
The paper you originally linked to, published in 2012, also states clearly that they have not yet analyzed the data to determine any trend. Your claim that the trend does not exist because they have not yet analyzed the data is simply false. Not analyzed means not yet determined.
If this is our biggest complaint in a 1,000 page document the IPCC must have done a bang up job! Imagine how good the final document will be!
@Victor Venema:
You wrote “The IPCC does not do any research, it just reviews the existing research. ”
Do you sincerely believe the IPCC does not do research? When they come up with charts like Figure 2 (“Summary of the principal components of the radiative forcing of climate change”) at this link [ http://co2now.org/Know-the-Changing-Climate/Climate-System/ipcc-faq-human-natural-causes-climate-change.html ], is that done without research? And then, without doing research, they claim CO2 (as the main human influence) is the main driver of climate change?
Do you in fact believe this? Seriously, I want to know if you think this is credible and are willing to be known as saying as much.
Forrest M Mims says:
Your last parenthetical statement provides evidence that the water vapor feedback is acting as expected. You do understand, do you not, that the water vapor feedback predicts that the warming due to any cause, including CO2, will lead to an increase in water vapor. Hence, over short periods when the temperature trend is not robustly up because of the various other factors that affect temperatures on shorter time scales, the water vapor trend won’t be robustly up either.
It is also worth noting that the largest radiative impact of increased water vapor is predicted to be in the upper troposphere, and it is now well-documented that water vapor does closely follow temperature trends and fluctuations there. (The result is particularly robust for the fluctuations, where the data is most reliable because it is least susceptible to artifacts that can affect secular trends over longer time scales.) See here for a discussion: http://www.sciencemag.org/content/323/5917/1020.summary and http://www.sciencemag.org/content/310/5749/841.abstract?sid=8c0c3aea-6dec-4dfa-a907-8278fe76ef20. And see here for a discussion of why the data set from one outlier re-analysis that is being pushed by Ken Gregory et al. (and is plotted in his graphs in a way, in terms of relative humidity, that makes it difficult to compare to expectations anyway) is not believable: http://geotest.tamu.edu/userfiles/216/Dessler10.pdf
@Victor Venema: You wrote “Could you refer to the original publication that used only correlation and did so on such a short time series to arrive at such a strong conclusion?”
In answer to your question, of course the IPCC will not say they used only correlation.
From 1970 through 2001 is a pretty short period of time, since there was cooling prior to around 1970.
From IPCC AR3 (2001): “[Most] of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” The IPCC defines “very likely” as indicating a probability of greater than 90%, based on expert judgement.
Here’s a long term trend for you since longer term trends are more valuable. They use whatever correlates well with what they already expected or “know” and then use only supporting “research” to produce models (not research) to verify and prove the outcomes that they expected.
For reference, from the IPCC AR4:
The IPCC’s Fourth Assessment Report says “it is extremely likely that human activities have exerted a substantial net warming influence on climate since 1750,” where “extremely likely” indicates a probability greater than 95%.
joelshore says:
“It is also worth noting that the largest radiative impact of increased water vapor is predicted to be in the upper troposphere and there it is now well-documented that the water vapor does closely follow temperature trends and fluctuations there… where the data is most reliable because it is least susceptible to artifacts that can affect secular trends over longer time scales.”
Wrong, like almost every alarmist “prediction”. Global relative humidity has been declining for decades, and shows no signs of recovery. Tropospheric specific humidity is declining, too.
D Boehm says:
Which part of “see here for a discussion of why the data set from one outlier re-analysis that is being pushed by Ken Gregory et al. (and is plotted in his graphs in a way, in terms of relative humidity, that makes it difficult to compare to expectations anyway) is not believable: http://geotest.tamu.edu/userfiles/216/Dessler10.pdf ” was confusing to you in my last comment?
Hmm-m-m. joelshore quotes something I never wrote. No matter, the fact is that relative humidity has been declining for decades, falsifying the always-easily-deconstructed CO2=AGW conjecture.
These circular arguments always lack one vital ingredient: verifiable, empirical, testable scientific evidence. Catastrophic AGW is an evidence-free hand-waving scam intended to keep the climate alarmist gravy train from being derailed. Who should we believe, the planet itself — or self-serving alarmists like joel shore and his pals?
Bill Illis says:
December 15, 2012 at 5:42 am
The first thing I did with the AR5 leak was to look up the water vapour data and studies they were using.
I understood right away what this report was going to be about – data selection and the refusal to use any data which contradicts the global warming meme.
I downloaded the water vapour forecasts that are being used in the IPCC AR5 awhile ago. AR5 has water vapour up by 6.0% already and it is forecast to be 24% higher by the year 2100.
If we look at the actual observational data, however, it is FLAT. The ENSO is really the biggest factor in its variability. Water vapour was only 0.4 kg/m2 (0.4 mm) higher than normal (25 mm) in November 2012, and it is now on the way down to zero again given its response to the ENSO.
Water Vapour, the ENSO and the IPCC AR5 forecast from 1948 to November 2012.
http://s16.postimage.org/qe1cvc3id/ENSO_WV_IPCC_AR5_Nov2012.png
Thanks for this very important comment – once again this is a story all about ENSO – water vapour is yet one more climate parameter – like global temperatures themselves – which meekly follow in step after the little boy and the little girl. Further confirmation of the thesis of Bob Tisdale that ENSO over the last half century has driven global temperatures – and with them, atmospheric water vapour. ENSO might just be about to take them both downhill for a while.
MiCro says: “If you noticed I started at 1950 to eliminate years with the fewest measurements. But, this is the raw data that warmists all use, prior to their adjustments.”
In other words, you did not homogenize your data? Then it is up to you to prove that this leads to more accurate results. The current understanding is that homogenization leads to more accurate trend estimates.
Mario Lento says: “Do you sincerely believe IPCC does not do research? When they come up with charts like the ( Figure 2. Summary of the principal components of the radiative forcing of climate change.) on this link [http://co2now.org/Know-the-Changing-Climate/Climate-System/ipcc-faq-human-natural-causes-climate-change.html ] by not doing research? They then without doing research claim CO2 (as the main human influence) is the main driver of climate change? .”
That claim comes from attribution studies, which are reviewed in the report, not from research by the IPCC. In the FAQ they do not give the references, but in the rest of the section close to the FAQ you can probably find these references.
Mario Lento says: “In answer to your question, of course the IPCC will not say, they used only correlation.”
They do not say so, because they do not do so. They make the claim of the relation between greenhouse gases and the temperature based on a physical understanding of the climate system. That is the main difference with people who think that the sun is responsible for the recent temperature rise, they might have a correlation, but they do not have a working mechanism. If you would like to make the sun a credible alternative hypothesis, you will have to find a physically possible amplification mechanism (the direct influence of the solar radiation is much too small).
D Böehm says: “Global relative humidity has been declining for decades, and shows no signs of recovery. Tropospheric specific humidity is declining, too.”
Is it too much to ask, not only to link to a picture, but to link to a text that explains the picture? Not everyone is as well informed as joelshore; an explanation would also allow the rest of us to judge the relevance of the picture.
I would, for example, love to know whether your picture was for one station, the average over the land surface, or the average over the globe. As far as I know there is some indication that the relative humidity over land is decreasing as the temperature over land has increased more than the temperature over the ocean, which is the main source of humidity.
MiCro:
In response to vvenema, at December 17, 2012 at 3:15 pm you say
You seem to have misunderstood. Extrapolation of temperatures enables the adoption of assumptions which can generate the results you want to obtain. Hence, such extrapolation is an essential part of the process for creating global and hemispheric temperature time series of use to e.g. the IPCC. Without the extrapolations there would be no possibility of doing this.
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
And, before anybody asks, no, I have NOT forgotten sarc tags.
Richard
Victor Venema says:
December 18, 2012 at 2:08 am
“MiCro says: “If you noticed I started at 1950 to eliminate years with the fewest measurements. But, this is the raw data that warmists all use, prior to their adjustments.”
In other words, you did not homogenize your data? Then it is up to you to proof that this leads to more accurate results. The current understanding is that homogenization leads to more accurate trend estimates. ”
So, extracting a Min/Max from synoptic data isn’t okay, yet cherry picking stations (to homogenize for UHI) and making up data for areas that have not been measured is? What homogenization does is allow you to make up whatever trend you might want, especially once you allow proxy data to be mixed in.
If you think homogenization and extrapolation leads to better data, why don’t we just pick a single station’s data that we know is high quality, and just use that? That would be the ultimate homogenized/extrapolated chart then?
My secondary intention when I downloaded all of this data (I have copies of GSoD, CRU’s, and Best’s) was to display what the actual data says, not some made-up numbers.
I’ll also note that actual measurements show a flat trend in the Northern Hemisphere since about 1997-8, tropical temps are almost flat, and in the Southern Hemisphere max temps are down somewhat compared to ~1965-70 while min temps are simply down. And yes, since I included all of the urban stations I’ve included all of the UHI effects (which I’ve measured at my home compared to the local airport’s station), and I still have graphs that aren’t scary.
So of course homogenized (cough, cough made up) data is better, because the actual measurements will not inspire the correct amount of fear required to herd the public into submission.
The goal of the interpolation is to give every station a weight that corresponds to the area it is representative of. Stations in regions with few stations should be given a stronger weight than stations in regions with many stations. If you make a simple average over all stations, as MiCro did, you do not get a global climate signal, but basically one for the US and Europe.
If there are too few stations, such as at the poles, the interpolation method becomes important. An easy way out would be to ignore the poles, the way it is done in the CRU dataset. This is still a lot better than computing the climate trend for only the US and Europe.
MiCro, could you explain why you see homogenization as “making up data”? Which steps of the homogenization procedure do you object to exactly, which aspects of validation studies do you see as invalid? The linked blind validation study shows that homogenization makes the trend estimates more accurate. I would love to understand why you think you could get “any trend you might want” using homogenization. If you give some arguments, it would be easier to discuss.
MiCro: “If you think homogenization and extrapolation leads to better data, why don’t we just pick a single station’s data that we know is high quality, and just use that? That would be the ultimate homogenized/extrapolated chart then?”
1. There is probably no station without inhomogeneities. (The UHI effect is not the only inhomogeneity).
2. A single station shows much more variability than an average over a region. This makes it harder to see a small trend.
3. Climate variability is different everywhere. The variability of a single station is thus not representative of the global variability. This is also the reason why it is a problem that most of your stations are in the industrialised world, which is a very small part of the Earth, and thus why it is necessary to put more weight on stations in data-sparse regions.
vvenema says:
December 18, 2012 at 5:23 am
“The goal of the interpolation is to give every station a weight that corresponds to the area it is representative of. Stations in regions with few stations should be given a stronger weight than stations in regions with many stations. If you make a simple average over all stations, as MiCro did, you do not get a global climate signal, but basically one for the US and Europe.”
Did you actually follow the links to google maps? There are a lot of stations across the globe. And while older station data is sparse, it’s still better than proxy reconstruction data which is considered valid data by warmists when they like what it says.
“MiCro, could you explain why you see homogenization as “making up data”? Which steps of the homogenization procedure do you object to exactly, ”
“Changes that happen at only one of the stations are assumed to be non-climatic. The aim of homogenisation is to remove such non-climatic changes in the data.”
Because this is wrong!
As I pointed out, I’ve watched the temps I’ve measured myself, while at the same time comparing the data to a station a couple miles away, as well as to the station at the nearby major airport (35 miles). Let me give you another example: weather fronts can have large differences on each side, and each side evolves differently because of the direction of travel of the different air masses.
As I’ve also pointed out, I’m doing anomaly (difference) analysis, since I’m generating data based on measurements from the same station. My goal wasn’t to do the same thing Best did (and get the same homogenized answer everyone else does); I was looking at daily cooling.
And in my opinion, I’ve shown that water vapor controls the temps, with CO2 at most tweaking them. Water vapor regulates temps, not CO2. The proof is that when weather produces clear skies and low humidity over a couple of days, the night-time drop in temps is 2-3 times the average seen when clouds and humidity are at “normal” levels.
Richard encouraged me to look for such records; in 112 million records, I’ve found 258 with matching (rare) conditions:
Average Falling temp=56.6F
Average Rising temp=56.5F
Max Falling temp=60.3F
Max Rising temp=59.4F
Average Dew point=16.98F
Average Mean Temp=41.875F
Average Visibility=88.62 miles
Max Visibility=1000
My criteria for selecting these station records:
falling_temp_diff > 55
AND wind_speed < 10
AND rising_temp_diff > 55
AND dewpoint < temp
AND rising_temp_diff > falling_temp_diff - 2
AND rising_temp_diff < falling_temp_diff + 2
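Those criteria translate directly into a filter. Below is a minimal Python sketch over a few invented sample records using the column names from the criteria; the actual 112-million-record GSoD-derived table is of course not reproduced here.

```python
# Invented sample records using the column names from the criteria above.
records = [
    {"falling_temp_diff": 56.6, "rising_temp_diff": 56.5,
     "wind_speed": 5.0, "dewpoint": 17.0, "temp": 41.9},   # should match
    {"falling_temp_diff": 40.0, "rising_temp_diff": 41.0,
     "wind_speed": 3.0, "dewpoint": 20.0, "temp": 45.0},   # diffs too small
    {"falling_temp_diff": 58.0, "rising_temp_diff": 57.5,
     "wind_speed": 12.0, "dewpoint": 15.0, "temp": 40.0},  # too windy
]

def matches(r):
    # Direct transcription of the six selection criteria listed above.
    return (r["falling_temp_diff"] > 55
            and r["wind_speed"] < 10
            and r["rising_temp_diff"] > 55
            and r["dewpoint"] < r["temp"]
            and r["rising_temp_diff"] > r["falling_temp_diff"] - 2
            and r["rising_temp_diff"] < r["falling_temp_diff"] + 2)

hits = [r for r in records if matches(r)]  # only the first record survives
```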
MiCro says: “Did you actually follow the links to google maps? There are a lot of stations across the globe. And while older station data is sparse,…”
Those stations across the globe are very important, otherwise one could not compute a global climate signal from the station measurements. This does not change that the station density in the industrialized world is much higher and this needs to be taken into account somehow.
MiCro says: “Because this is wrong! As I pointed out I’ve watched the temps I’ve measured myself, while at the same time comparing the data to a station a couple miles away, as well as the station in the near by (35miles) major airport. Let me give you another example, weather fronts can have large differences on each side, which each evolve differently because of the direction of travel of the different air masses.”
Fronts moving over the area and many other phenomena will produce weather noise. The further the stations are apart, the larger this noise will be. There may also be biases between two neighbouring stations. That is why difference time series are only used to search for inhomogeneities and to determine the size of the jump. After homogenization the data from the neighbouring stations are not the same: there will still be a difference in the mean, the noise will still be different, and there may still be smaller inhomogeneities in the data and small differences in the local climate variability. Only the clear systematic jumps seen in the difference time series are removed.
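The difference-series idea described here can be sketched briefly: subtract a nearby reference from a candidate series so that shared weather cancels, then look for the largest shift in the mean of the difference. This is a toy illustration with invented numbers, not an operational homogenization algorithm.

```python
import numpy as np

def largest_jump(candidate, reference):
    # Shared weather noise cancels in the difference series, so a
    # station-specific break shows up as a shift in its mean level.
    d = np.asarray(candidate, float) - np.asarray(reference, float)
    best_k, best_size = 0, 0.0
    for k in range(1, len(d)):
        size = abs(d[k:].mean() - d[:k].mean())
        if size > best_size:
            best_k, best_size = k, size
    return best_k, best_size  # index of the break and the shift it caused

# Invented series: identical weather, but the candidate jumps by +1.5
# (say, a screen change) starting at index 5.
reference = [10.0, 11.0, 10.0, 11.0, 10.0, 10.0, 11.0, 10.0, 11.0, 10.0]
candidate = [10.0, 11.0, 10.0, 11.0, 10.0, 11.5, 12.5, 11.5, 12.5, 11.5]
```

Run on these series, the function recovers the break at index 5 with a size of 1.5; the alternating weather pattern common to both stations never appears in the difference.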
MiCro says: “And in my opinion, I’ve shown that water vapor controls the temps, and however much co2 tweaks temps. Water vapor regulates temps, not co2.”
That is right, water is a much stronger greenhouse gas than CO2. The reason for worrying more about CO2 is that the water flows in the hydrological cycle are so large that humans cannot influence the humidity directly. The humidity we add simply rains out again. We can only increase humidity indirectly by increasing CO2, which increases the temperature, which allows the atmosphere to hold more humidity. Humans are able to increase the atmospheric CO2 concentration, because CO2 is not removed fast enough from the atmosphere by the oceans and the land.
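The "warmer air holds more humidity" step invoked here is the Clausius-Clapeyron relation. A common closed-form approximation, the Magnus formula (with standard published coefficients), shows roughly a 6-7% rise in saturation vapor pressure per degree Celsius near typical surface temperatures:

```python
import math

def saturation_vapor_pressure(t_c):
    # Magnus approximation for saturation vapor pressure over water, in hPa,
    # for a temperature t_c in degrees Celsius.
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

e15 = saturation_vapor_pressure(15.0)    # about 17 hPa
e16 = saturation_vapor_pressure(16.0)
ratio = e16 / e15                        # roughly 1.065: ~6.5% more capacity per degC
```

Note this formula only says what the atmosphere *can* hold; whether that extra capacity shows up as an actual water vapor trend is exactly what the NVAP-M debate above is about.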
vvenema says:
December 18, 2012 at 8:40 am
“We can only increase humidity indirectly by increasing CO2, which increases the temperature, which allows the atmosphere to hold more humidity. Humans are able to increase the atmospheric CO2 concentration, because CO2 is not removed fast enough from the atmosphere by the oceans and the land.”
This hypothesis is yet to be proven by measurements, and I think the data I’ve generated proves it to be false.
What I’ve shown in the data is that water vapor controls temps, period.
Contrary to the consensus, science doesn’t work on Holmesian logic: you can’t prove a hypothesis with a simulator that’s coded as if the hypothesis were fact, and the chart from AR5 showing measured temps falling out of the GCM model temp ranges is just more proof of how wrong the models are.
vvenema says:
December 18, 2012 at 8:40 am
“Those stations across the globe are very important, otherwise one could not compute a global climate signal from the station measurements. This does not change that the station density in the industrialized world is much higher and this needs to be taken into account somehow.”
Let me also note that when you analyze the tropics and southern hemisphere separately (which I do), it doesn’t matter how many stations are in the northern hemisphere.
vvenema says: “We can only increase humidity indirectly by increasing CO2, which increases the temperature, which allows the atmosphere to hold more humidity.”
Yes, that’s the theory, but observations do not support this theory. People like you are stuck in a theoretical rut.
if you wish to get out of that rut, then you must deal with observations:
CO2 increases, but H2O vapor does not, and temperature does not. Now, let’s see if you are capable of accommodating your theory to observations.
vvenema,
You ask questions, but you don’t seem to be learning anything. Try to get out of your mental rut. CO2 is not a problem. If it was a problem, we would have evidence of it. But there are no empirical measurements showing any global damage or harm due to the rise in CO2.
In fact, the rise of CO2 has been beneficial. Agricultural yields are clearly increasing as a direct result. And although a couple of degrees warmer would make the planet more pleasant and livable, that is not happening despite the rise in [harmless, beneficial] CO2.
Try to get your mind out of the rut that 24/7/365 alarmist propaganda has caused. Those people have a self-serving agenda. They want to scare you with a false alarm, and it appears that they have succeeded.
Nothing unusual is occurring. Temperatures have been both higher and lower throughout the Holocene, when CO2 was much lower. Think for yourself. Doesn’t that indicate that CO2 does not have the claimed effect? The ultimate Authority — Planet Earth — is proving the alarmist crowd wrong. You can’t see that?
mpainter, are you referring to the funny hype of the climate ostriches about a climatologically short and cherry picked period of 16 years with no significant warming? Then the answer is easy: natural variability. Just look at the graph of the average temperature over the last 100 years and you will see that there were always periods in which the temperature growth stagnated and ones in which it went up faster. The relationship between greenhouse gas concentrations and temperature is only visible on long time scales.
I would also love to get out of that rut. If I could do so with good arguments, I would get a Nobel prize. Can you offer any help? I hope to be able to prove soon that the temperature trend estimates are not as accurate as we think, which would be fun, but destroying the mechanism, or just finding a reasonable alternative hypothesis, would be the main prize.
vvenema,
You are a lost soul, incapable of admitting that the Null Hypothesis has never been falsified. And your comments about natural variability are simply the old Argumentum ad Ignorantiam fallacy: “Since I can’t think of any other reason, then CO2 must be the cause of global warming.” Since the alarmist crowd has not got anything right yet, are you still willing to believe everything they tell you to believe? Really?
Good luck with your ‘Nobel prize’. ☺
vvenema says:
December 18, 2012 at 11:23 am
“mpainter, are you referring to the funny hype of the climate ostriches about a climatologically short and cherry picked period of 16 years with no significant warming?”
My data shows the same flat temps for the last 16 years or so, and I used all of the actual measured data (well 114 Million records out of ~120 million), no cherry picking involved.
Dear Mr Böehm, I guess you mean climate scientists with the term “alarmists”.
I am at least sure that they got one thing right. They told us that their statistical homogenization algorithms improve the quality of climate data and make trend estimates more accurate. Thus I have organized a validation study, generated a dataset with known inhomogeneities, and asked the climatologists to remove them.
This study was blind, so that these “alarmists” could not cheat. The results showed that homogenization improved the temperature data as promised. But I will keep trying to find problems. It’s my job.