UPDATE: 08/07 2:00 PM PST MLO responds with improvements to the CO2 data reporting
Approximately 24 hours after I published my story on the January to July trend reversal of CO2 at Mauna Loa, the monthly mean graph displayed on the NOAA web page for Mauna Loa Observatory has changed. I’ve set up a blink comparator to show what has occurred:
For those who don’t know, a blink comparator is an animated GIF image with a 1 second delay, consisting only of the two original images from NOAA MLO. Individual image URLs for: August 3rd ML CO2 graph | August 4th CO2 Graph
Now there is no longer the dip I saw yesterday. Oddly, the MLO CO2 dataset available by FTP still shows yesterday’s timestamp (File Creation: Sun Aug 3 02:55:42 2008), and the July monthly mean value in it has not been updated to reflect the change on the graph.
[UPDATE: a few minutes after I posted this entry, the data changed at the FTP site] Here is the new data for 2008:
# decimal mean interpolated trend
# date (season corr)
2008 1 2008.042 385.37 385.37 385.18
2008 2 2008.125 385.69 385.69 384.77
2008 3 2008.208 385.94 385.94 384.50
2008 4 2008.292 387.21 387.21 384.46
2008 5 2008.375 388.47 388.47 385.46
2008 6 2008.458 387.87 387.87 385.51
2008 7 2008.542 385.60 385.60 385.25
and here is the 2008 data from Sunday, August 3rd:
2008 1 2008.042 385.35 385.35 385.11
2008 2 2008.125 385.70 385.70 384.85
2008 3 2008.208 385.92 385.92 384.38
2008 4 2008.292 387.21 387.21 384.59
2008 5 2008.375 388.48 388.48 385.33
2008 6 2008.458 387.99 387.99 385.76
2008 7 2008.542 384.93 384.93 384.54
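For reference, the month-by-month differences between the two file versions can be computed directly from the values listed above (a quick sketch; the numbers are copied from the two listings):

```python
# Revised (Aug 4) vs. original (Aug 3) 2008 monthly means (ppm)
# from the MLO co2_mm_mlo.txt file, as posted above.
old = {1: 385.35, 2: 385.70, 3: 385.92, 4: 387.21,
       5: 388.48, 6: 387.99, 7: 384.93}
new = {1: 385.37, 2: 385.69, 3: 385.94, 4: 387.21,
       5: 388.47, 6: 387.87, 7: 385.60}

# July shifts by +0.67 ppm; every other month moves by 0.12 ppm or less
for month in sorted(old):
    delta = round(new[month] - old[month], 2)
    print(f"2008-{month:02d}: {delta:+.2f} ppm")
```

This makes the pattern easy to see: July absorbs almost the entire revision, consistent with the “minor, typically less than 0.1 ppm” description of the other adjustments.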
Here is the MLO data file I saved yesterday (text converted to PDF) from their FTP site.
Here is the URL for the current data FTP:
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt
I have put in a query to Pieter Tans, the contact listed in the data file, asking for an explanation and change log if one exists.
UPDATE 08/05 8:55AM PST I have received a response from MLO:
Anthony,
We appreciate your interest in the CO2 data. The reason was simply that
we had a problem with the equipment for the first half of July, with the
result that the earlier monthly average consisted of only the last 10
days. Since CO2 always goes down fast during July the monthly average
came out low. I have now changed the program to take this effect into
account, and adjusting back to the middle of the month using the
multi-year average seasonal cycle. This change also affected the entire
record because there are missing days here and there. The other
adjustments were minor, typically less than 0.1 ppm.
Best regards,
Pieter Tans
UPDATE 08/05 4:03PM PST
I have been in dialog with Dr. Tans at MLO through the day and I’m now satisfied as to what has occurred and why. Look for a follow-up post on the subject. – Anthony
UPDATE 08/06 3:00PM PST
A post-mortem of the Mauna Loa issue has been posted here:
http://wattsupwiththat.wordpress.com/2008/08/06/post-mortem-on-the-mauna-loa-co2-data-eruption/
– Anthony

My comments and questions may seem rather innocent if I am not interpreting the Pieter Tans explanation correctly. I don’t know, but I will make them anyway.
First, in the magnified chart that Anthony showed on the previous post, the delta was measured from Jan 8 to Jul 8. If their monthly average consisted only of the last 10 days of July, due to malfunctioning equipment, then measuring the delta from Jan 1 to July 1 (or from Dec 31 to June 30) should not be affected by this problem. And you would be measuring exactly the first half of the year. What would that show in comparison with the same span (Jan 1-July 1) in the past? Would it still be negative? If so, would it still be the first instance of a negative delta for the first six months of the year?
I wonder if I am interpreting this sentence right:
“…we had a problem with the equipment for the
first half of July, with the result that the earlier monthly average consisted of only the last 10 days.”
Does that mean the equipment didn’t record anything before the last 10 days of the month, or does it mean that it did, but the data for the previous 20 days was bad or failed to be considered in the monthly average? What exactly does it mean?
Were they unaware of that problem? If so, why did they release the data before fixing it? Did they really only find out last night that their monthly average was based on just 10 days of data?
I may be looking at this the wrong way, but maybe someone can enlighten me.
Jerker Andersson (00:09:47) :
“What is interesting is that it happens within 24h after they release a result that shows a new record low for first 7 months.”
I agree. That means there is a big risk of quality mistakes in the change.
“If there is an error and they notice it, it must be corrected. In the process it looks like they found errors in every month, at least for the last 4 years.
They say they ran the adjusted program for all days of all years, where it has put in “some data” (possibly running mean values) on the days when there was none. But a problem, I think, is if they have changed data according to new code which hasn’t gone through a planned quality validation process of at least a week or so. That’s not acceptable in any organisation which has a quality reputation to defend.
What I’ve read about the Mauna Loa record is that data from the days when the wind is blowing from the volcano is omitted. I hope they have not included any of that data now! A fast change like this by some engineer is always very risky.
It also doesn’t seem likely that the first July value presented was based on only the last 10 days of the month, since the change when the first 20 days of data were included is only about 20 percent or so. It should be at least a 50 percent change if the first 20 days of data were actually missed at first and included later. That’s because the change is continuous, with the lowest values at the end of the month.
Another thing: I also find it surprising that Lucia suggested this July value was a calibration error that is going to be corrected. Isn’t calibration something which is done only once, and doesn’t recalibration create a discontinuity in a record?
Finally: if Henry’s law is real (and it is), no one should be surprised if the CO2 concentration rises more slowly, or stops rising, during a cooling phase in the coming years, no matter that we annually emit about 3 percent of the naturally emitted CO2. (Only about 5 percent of the CO2 in the atmosphere is measured to be from fossil fuel.)
Did they start that process yesterday also, reverifying those 4 years?
Did they actually manage to recalibrate and make new measurements of all those samples from the last years in one day, or is it a mathematical adjustment of the results?
It looks like almost all data points that stick out a bit have been smoothed up AND down in order to create a straight line showing that there is less variability.
The adjustment does not seem to affect the overall CO2 trendline in any significant way though, just making it smoother.
So what errors did they find that gave both too high values when the CO2 level was above trendline and too low when it was below trendline?
Not that most of our elected reps are worth a damn anymore, but if you happen to have a congressional contact that is even mildly skeptical of NOAA’s activities, they need to see this. Further, they need to call up the head of NOAA to explain the fuzzy math that at least one of their observatories is using. We get nowhere without some political push-back… I have already sent a letter to my (one) receptive Senator…
Oops!
I forgot to remove text (by Jerker) in the last 4 paragraphs. From “did they start that process…” onward, all lines should be removed!
I’m really sorry.
It sounds to me like they are trying to adjust the measurements taken every July to compensate for rapidly decreasing values. If the values decrease rapidly, changes from one July to the next may not be consistent, since the data is selectively chosen for each day, meaning there may be days when the criteria for selection (wind speed, direction, duration, etc.) were not present. To make the data more consistent from one July to the next, it looks like some kind of statistical rule was coded in: “if this” or “if that” occurs, “do that” to it (code that looks for something in particular in the data and then adjusts it). I do know that for rapidly changing and highly variable data, it is hard to compare one month to the next similar month (i.e., in this case, Julys) because there is nothing similar about them. I don’t know what to think about the equipment problem.
What I took from what Pieter Tans said was that there were missing days in the record. That causes them to have to calculate a “fill” value. That “fill” value is probably a function of the average over time. When something is done to change that average over time, it changes those fill values and when those fill values change, the monthly average values change.
If you were looking at daily data instead of monthly data, you would only see certain days jump around with the change. But those few days do impact the average.
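A toy example of the fill-value mechanism described above (the mechanism itself is inferred from Dr. Tans’ wording, not documented NOAA code; the function and numbers are made up for illustration):

```python
# Sketch of a "fill value" mechanism: missing days are filled from a
# long-term mean, so revising that mean silently shifts every monthly
# average that contains gaps. This is an assumption, not NOAA's code.

def monthly_mean(daily, fill):
    """daily: list of measurements, with None marking missing days."""
    return sum(v if v is not None else fill for v in daily) / len(daily)

july = [385.2, None, 385.0, None, 384.8]  # toy record with two gaps
print(round(monthly_mean(july, fill=385.5), 2))  # old long-term fill
print(round(monthly_mean(july, fill=385.8), 2))  # revised fill
```

With two of five days filled, a 0.3 ppm change in the fill value moves the monthly mean by 0.12 ppm even though no measurement changed, which is exactly the “few days impact the average” effect described above.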
What bothers me most is that errors seem to get found and corrected that make the CO2 level “too low”. But I wonder how many errors there might be that make “too high” and aren’t found because rising CO2 is the expected result. Only when something is lower than it should be does anyone notice anything. How many times in the past were readings higher than they should have been? We will never know.
(I also forgot to switch off italics in my first sentence. I’ve got a terrible headache and feel sick 🙁 )
And were the missing days in previous months and years always in the second half of the month?
I have not studied the changes carefully, but it does seem odd if the alterations only seem to go one way (in the main).
It strikes me as peculiar that the only errors that NOAA discovers on its own are errors that, when “corrected”, bring the data in line with CO2 induced global warming. I’m not saying that they are deliberately manipulating the data, I’m saying that it is unusual that the only errors discovered are ones that are at odds with the accepted theory.
When I perform tests the data that most bothers me is the data that is exactly what I expect. That is the data that I double check. NOAA seems to be operating on a different principle.
If I were at a roulette table and I saw the casino owner’s wife win nine consecutive times I wouldn’t necessarily conclude that the table is rigged, but just the same I would move my gambling to another casino.
“the earlier monthly average consisted of only the last 10 days.”
It took them 30 years to figure out that monthly averages should involve division by the number of samples, not the number of days? Where’s their raw data?
If one makes a zillion small adjustments that slowly add up to a large adjustment, one can look up innocently and say the adjustments are very small (i.e., the classic “fallacy of the parts”).
Evan, it’s not at all unlike stories about bank employees who have been caught embezzling the rounded fractions of cents from thousands of accounts. Pretty soon you’re talking serious jack.
Were they unaware of that problem? If so, why did they release the data before fixing it? Did they really only find out last night that their monthly average was based on just 10 days of data?
Xavier, at the risk of sounding nasty, we ARE talking about government bureaucrats here, PHD or not.
Frank L/ Denmark (07:45:57) :
I’ve been observing that, too. All of the “adjustments” of temperature and CO2 seem to go in one direction only: in support of AGW. All of them. What are the odds, eh?
The public’s perception of NOAA and GISS credibility is getting close to this: click
Let me see if I understand this. I goofed and missed that my meter was broken for 21 days, so I need to take the average over the time of record and use that to fill the days I missed. (I hope no one notices that I goofed.) Oops, that change also changed every record back to 1974 for missed days. (Dang, since the average is higher than the observations from ’94 back, it brought the whole record up some.) HE HE HE, the record shows that the CO2 problem is worse than we thought. (Take that, skeptics.)
The parts in parentheses are what I think they are thinking.
Anthony or Charles you may snip if desired. I just hate to be taken as completely ignorant even if my education only goes to an associate level. I can read and understand most of it.
Bill Derryberry
Due to the lack of disclosure, there is always a chance that the entire adjustment was made because the self-described “skeptics” were running with the July drop, and this made people nervous enough to attempt a more accurate value using the method Dr. Tans describes.
However, because of the degree to which July 2008 is missing the actual data it is useless to draw any conclusions based upon it. We are going to have to wait for more data, something we would have had to do in the first place.
I have no doubt the AGWers monitor all the skeptic-aligned sites/blogs and would have screamed at NOAA the moment the old July 2008 value hit the skeptic radar screen. I suspect that someone has already told Dr Tans I am a “climate criminal” skeptic (not of increasing CO2, however).
Echoing several others:
It would be good if logs were kept of all historical adjustments.
Site history information probably should include:
1. Records of routine equipment checks and maintenance
2. Records of how and when data is taken
3. Explanation of change
4. Data adjustment details
5. Name of scientists involved
Stan: Glad to see you back.
Phineas Freak (the “smart one” of the Fabulous Furry Freak Brothers), in that immortal masterpiece, The Idiots Abroad, amasses a fortune by hacking into an oil company computer and diverting one cent from each barrel of oil sold. He then greatly compounds the wealth by founding a fake religion to launder his ill-gotten gains.
There are a few uncomfortable parallels here, actually . . .
Too bad for the self-described “skeptics”
Nah, that’s not it. It’s a pretty good day for this skeptic. I got to see, almost real-time, an inconvenient trend reversal in CO2 erased. We can’t be having CO2 concentrations following global temps down, now can we?
I imagine high-fives all over the top of Mauna Loa after this smoooooth moooove.
Denis Hopkins (10:35:15) :
“…it does seem odd if the alterations only seem to go one way…”
They must introduce the same change for rapid increases too (if they want to be seen as serious). This rushed, quality-risky change wasn’t necessary anyway, given their disclaimer that a month’s data should be regarded as preliminary.
The explanation sounds as vague as can be. They had “…problems with the equipment” which wiped out the first 20 days of data? And they didn’t know this until yesterday?
And then: “This change also affected the entire
record because there are missing days here and there.”
HEY! It’s Hawaii… Lighten up you Haoles!
The increase from January to July is still very tiny. If global cooling is real, then the CO2 increase will continue to slow down, like it did in 1945 with the ENSO and PDO switch to negative. Still, the way NOAA adjusted (or “adjusted”) the data in such a quiet way so people would not notice is way too suspicious for me to ignore. I do not think NOAA is an organization that has much credibility anymore, considering their little “climate change” report propaganda, not to mention the very suspicious Smith and Reynolds global temperature measurements, whose existence is hard to even learn about, and when they are mentioned, it is always in a very bad light.
Just a little heads up … the July data are now in NOAA’s NCDC Climate At A Glance page. The page says June still, but the July data are there. It looks like this July was cooler than last year. Last year’s North American July average (according to NOAA’s adjusted USHCN v2 numbers) was 75.56 F and this year is 74.93, for a 0.63 drop from last year. Preceding-12-month temperatures from 1998 to this year show a negative trend of 0.4 degrees/decade.
So the new data really is not the July data but the average of the data for the past … how many years? Boy oh boy, gotta love that climate science!
It’s perfect! It’s beautiful! It is an excellent example of data modeling. The numbers for July were lower than they expected. So they used a COMPUTER PROGRAM to extrapolate the numbers on the ASSUMPTION that the missing numbers (first half of July) would have been higher. Then they ran the COMPUTER PROGRAM for a few years back to IMPROVE those calculations as well – to APPLY THE MODEL.
THIS, my friends, is how computer models work. This is an excellent example of how the single most important part of the global warming hysteria – the computer models – change by ASSUMPTION and EXTRAPOLATION.
(my creds – 28 years in computing including work on several models)
The adjustments go both ways as seen on this plot: http://tinyurl.com/6qb3sg
The net gain is 0.19 ppmv over the entire 34 year record – this includes the July 2008 adjustment. If we back out the July 2008 adjustment of 0.67 ppmv, the gain becomes a decrease of 0.48 ppmv.
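The arithmetic quoted above checks out against the two data listings at the top of the post:

```python
# Net adjustment over the full 34-year record, as stated above (ppmv)
net_with_july = 0.19
# July 2008 alone: revised minus original value from the two listings
july_adjustment = round(385.60 - 384.93, 2)
print(july_adjustment)                            # 0.67
print(round(net_with_july - july_adjustment, 2))  # -0.48
```

So excluding July 2008, the remaining adjustments net out to a small decrease, consistent with the claim that they go both ways.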
I really don’t see anyone here diddling with the data-set in order to amplify the AGW aspect.