One day later: Mauna Loa CO2 graph changes

UPDATE: 08/07 2:00 PM PST: MLO responds with improvements to the CO2 data reporting

Approximately 24 hours after I published my story on the January-to-July trend reversal of CO2 at Mauna Loa, the monthly mean graph displayed on the NOAA web page for Mauna Loa Observatory has changed. I’ve set up a blink comparator to show what has occurred:

For those who don’t know, a blink comparator is an animated GIF image with a 1-second delay, consisting only of the two original images from NOAA MLO. Individual image URLs for: August 3rd MLO CO2 graph | August 4th CO2 graph

Now there is no longer the dip I saw yesterday. Oddly, the MLO CO2 dataset available by FTP still shows yesterday’s timestamp (File Creation:  Sun Aug  3 02:55:42 2008), and the July monthly mean value in it has not been updated to reflect the change on the graph.

[UPDATE: A few minutes after I posted this entry, the data changed at the FTP site.] Here is the new data for 2008:

# year  month   decimal date    mean    interpolated    trend (season corr)
2008   1    2008.042      385.37      385.37      385.18
2008   2    2008.125      385.69      385.69      384.77
2008   3    2008.208      385.94      385.94      384.50
2008   4    2008.292      387.21      387.21      384.46
2008   5    2008.375      388.47      388.47      385.46
2008   6    2008.458      387.87      387.87      385.51
2008   7    2008.542      385.60      385.60      385.25

and here is the 2008 data from Sunday, August 3rd:

2008   1    2008.042      385.35      385.35      385.11
2008   2    2008.125      385.70      385.70      384.85
2008   3    2008.208      385.92      385.92      384.38
2008   4    2008.292      387.21      387.21      384.59
2008   5    2008.375      388.48      388.48      385.33
2008   6    2008.458      387.99      387.99      385.76
2008   7    2008.542      384.93      384.93      384.54
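
The month-by-month changes between the two files can be checked directly; here is a minimal sketch, with the values transcribed from the two tables above:

```python
# Month-by-month changes between the August 3rd and August 4th versions
# of the 2008 monthly means (ppm), transcribed from the tables above.
old_mean = [385.35, 385.70, 385.92, 387.21, 388.48, 387.99, 384.93]  # Aug 3 file
new_mean = [385.37, 385.69, 385.94, 387.21, 388.47, 387.87, 385.60]  # Aug 4 file

for month, (old, new) in enumerate(zip(old_mean, new_mean), start=1):
    print(f"2008-{month:02d}: {new - old:+.2f} ppm")
# July shifts by +0.67 ppm; every other month moves by 0.12 ppm or less.
```
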

Here is the MLO data file I saved yesterday (text converted to PDF) from their FTP site.

Here is the URL for the current data FTP:

ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

I have put in a query to Pieter Tans, the contact listed in the data file, asking for an explanation and a change log, if one exists.

UPDATE 08/05 8:55 AM PST: I have received a response from MLO:

Anthony,

We appreciate your interest in the CO2 data.  The reason was simply that
we had a problem with the equipment for the first half of July, with the
result that the earlier monthly average consisted of only the last 10
days.  Since CO2 always goes down fast during July the monthly average
came out low.  I have now changed the program to take this effect into
account, and adjusting back to the middle of the month using the
multi-year average seasonal cycle.  This change also affected the entire
record because there are missing days here and there.  The other
adjustments were minor, typically less than 0.1 ppm.

Best regards,
Pieter Tans
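
Dr. Tans does not include the program itself, but the correction he describes (shifting a partial-month average to a full-month estimate using the multi-year mean seasonal cycle) can be sketched roughly as follows. This is an illustrative reconstruction, not NOAA’s actual code, and all names and numbers are hypothetical:

```python
def adjust_partial_month(daily_obs, climatology):
    """Estimate a full-month mean from an incomplete month of daily CO2
    readings, in the spirit of the correction described above.

    daily_obs   -- {day_of_month: measured CO2 (ppm)}, possibly sparse
    climatology -- multi-year average seasonal value for each day of the
                   month (list indexed 0..n_days-1); purely illustrative
    """
    days = sorted(daily_obs)
    obs_mean = sum(daily_obs[d] for d in days) / len(days)
    # Climatological mean over just the sampled days vs. the whole month:
    clim_sampled = sum(climatology[d - 1] for d in days) / len(days)
    clim_month = sum(climatology) / len(climatology)
    # If the sampled days sit climatologically low (e.g. only the end of a
    # falling month like July), shift the average back up by the difference.
    return obs_mean + (clim_month - clim_sampled)

# Toy July: CO2 falls ~0.1 ppm/day, and only the last 10 days were sampled.
clim = [388.0 - 0.1 * d for d in range(31)]
obs = {d: clim[d - 1] + 0.3 for d in range(22, 32)}  # days 22-31 only
print(adjust_partial_month(obs, clim))  # recovers ~386.8, near the true mean
```

With only the last 10 days of a falling month, the raw average comes out about a ppm low; adding the climatological offset pulls the estimate back to the middle of the month, which is why a change to this step would touch every month with missing days.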

UPDATE 08/05 4:03 PM PST

I have been in dialog with Dr. Tans at MLO through the day and I’m now satisfied as to what has occurred and why.  Look for a follow-up post on the subject. – Anthony

UPDATE 08/06 3:00 PM PST

A post-mortem of the Mauna Loa issue has been posted here:

https://wattsupwiththat.wordpress.com/2008/08/06/post-mortem-on-the-mauna-loa-co2-data-eruption/

– Anthony


247 thoughts on “One day later: Mauna Loa CO2 graph changes”

  1. Hmm, this could run and run over the next few months.
    (given probable trends)
    Or should that be change and change again.
    “Jim where are you, we need you NOW..” – LOL …

    GREAT SPOT.

  2. It seems that almost every single data point in that diagram has been adjusted, not by much, but still. What is interesting is that it happens within 24 hours after they release a result that shows a new record low for the first 7 months.

    If there is an error and they notice it, it must be corrected. In the process it looks like they found errors in every month, at least for the last 4 years. Did they start that process yesterday as well, re-verifying those 4 years?
    Did they actually manage to recalibrate and make new measurements of all those samples from past years in one day, or is it a mathematical adjustment of the results?

    It looks like almost all data points that stick out a bit have been smoothed up AND down in order to create a straighter line showing less variability.

    The adjustment does not seem to affect the overall CO2 trendline in any significant way though, just making it smoother.

    So what errors did they find that gave both too high values when the CO2 level was above trendline and too low when it was below trendline?

  3. ROFFFFL!!! It’s not like they adjusted this months data, they did it for years. And based on that data Anthony suggests some things to consider. [snip]

    Perhaps [Tamino] can explain all of that ever changing data. Is it a mistake that was caught years later? Is it like the photo of a flooded house (let’s make the CO2 rise before some one gets the idea that residence time isn’t really hundreds of years)?

    Damn the CO2 uptake of a cooling ocean! So is the annual downward slide of the sawtooth on the CO2 graph really all due to that northern hemisphere vegetation sucking CO2 out of the atmosphere? Or maybe it also has something to do with them big ol’ oceans in the southern hemisphere cooling every southern winter?

    Maybe someday this science will get settled, but one thing’s for sure: it sure as hell ain’t settled right now.

  4. Pingback: STAY WARM, WORLD… Roger Carr « Stay Warm, World…

  5. I looked at random at the data for 2000 (outside the graph). I find similar changes, now in the second decimal place, for July and August. Maybe you can run a difference program on all the data in the original files.

  6. I think it is important to remember that NOAA did post a disclaimer that the data is provisional for up to one year. That tells me that the correction for the July value was justified and had nothing to do with the direction of the error. However, the changes to data over one year old are quite odd and require further explanation, although I would assume that there is a good reason and people should not assume nefarious motives.

    It is also worth remembering that the Illinois Cryosphere Today site has had numerous data problems this year, where they ended up dramatically reducing the amount of ice melt reported after the corrections.

    The bottom line is we want these agencies to report their data in a timely fashion but they might stop doing that if they get beat up in the blogosphere every time their instrument has a hiccup.

    REPLY: I give them the benefit of the doubt, which is why I sent a message asking for an explanation. At the same time, there doesn’t appear to be any valid reason to adjust data from 2004 and further back. If they publish data publicly, then change it with no notice (as they did today), a public change log should exist; otherwise I think they are in violation of the Data Quality Act (DQA).

    It seems haphazard to publish preliminary data on a weekend, and then change it on a Monday, all with no public notice. While nefarious motives may not be there, it’s just damn sloppy IMHO, and given this is the crown jewel for CO2 data I expect far better. -Anthony

  7. The changes go all the way back to 1974 (for those who haven’t seen the other thread).

    I sometimes theorize that NOAA & GISS must have time machines that allow them to get new readings 34+ years after the fact. I can imagine no other reason that policy makers base decisions worth hundreds of billions on an ever-shifting chimera of data.

  8. After looking at the data yesterday, I was a little surprised by the old July value (384.93) – 3.55 ppm less than May’s number (388.48).
    I couldn’t find such a big 2-month drop anywhere in the records.

    So maybe the new number is the correct one.

  9. Looking now at the blink comparator, it appears all the data has been adjusted.
    That’s weird.

  10. D. Quist,

    1984
    How far that seemed in the future.
    How far it seems in the past.
    How precious are our moments.
    How quickly do they pass.

    And what from a gov’ment poet
    but some convoluted gas.

  11. There’s something odd at nsidc as well, where their graph suddenly bends downwards – it was updated just a couple of days ago. You can see it at:

    This seems completely at odds with the marginal increase in arctic sea ice shown at:

  12. The first graphic is extent, the second is anomaly. They are not necessarily inconsistent depending on the baseline average. It is not good to see conspiracy in every new datapoint.

  13. Anthony,

    My comment was directed at other posters – not you.

    I agree on your comments about undocumented changes and DQA. Even if there is a good reason for changing the historical data it should have been declared clearly in the header of the file.

  14. I did a quick plot of the differences between the old and new means and originally posted it on the old thread.

    http://tinyurl.com/6qb3sg

    The data comparison is available here: http://tinyurl.com/6hhy3e

    Other than July 2008, the adjustments seem to radiate out from 1994, each oscillation growing larger as time progresses in either direction.

    I now wonder if NOAA had a calibration issue in 1994 that was corrected. This still does not explain the change in July 2008.

    I look forward to NOAA explaining the justification for this sort of adjustment.

  15. Can I ask what is the point of putting out a new datapoint that is unadjusted and then putting out an adjusted one a few days later? Why not wait and just put out the adjusted one?

    And why adjust the entire graph’s datapoints in retrospect? Very weird.

  16. Right. They change the data the same day the extent of the anomaly is pointed out. Have they _ever_ made such extensive changes to the Mauna Loa data in the past? And the change just _happens_ to make June-July 2008 now rank 3rd in greatest drop, after 2004, by a hundredth of a ppm. How convenient. Unless there is a darn good explanation for these changes, the coincidence is just too close, especially with the changes happening right after Anthony Watts posts data. Pretty darn brazen. And scary. I always pooh-poohed the idea that true scientists would fudge data to match their preconceptions. Now I’m beginning to suspect I was too naive. Not only that, but these latest changes are being done between 12 am and 5 am east coast time. Why? Isn’t this the least bit suspicious to anyone?

    I think a serious investigation needs to be done. Maybe even a special prosecutor. Anyone have access to a congresscritter?

  17. jeez – isn’t anomaly just extent minus the 1979-2000 average? In which case extent and anomaly should be perfectly correlated.

    Or, to put it another way, why doesn’t the NSIDC data correlate with the Cryosphere data for sea ice area at the following link?

    This doesn’t show the downward kink that the NSIDC data shows. Or am I missing something? Is area not the same as extent? Or is the anomaly calculated from a daily mean rather than the overall mean? If so, that is not obvious from the data presentation in the graphs.

  18. Call it what you want. Bend over backwards to give them the benefit of the doubt, which they have never extended the other way – even when the consequence of their lack of doubt is the condemnation of the whole of humanity.
    You can repudiate me if you want, but I’ll call it as I see it.
    What they did there is the OJ Simpson defence. As the comedian Wanda Sykes would say ‘you’ve got to stick with your lie’ .

  19. If there is data manipulation, it would not be a new phenomenon in the “scare” dynamic. In my studies of the first prominent UK scare – the “salmonella in eggs” scare in 1988-9, I caught out a leading government epidemiologist “reinterpreting” figures in a food poisoning outbreak to turn it from “unknown origin” into a definite egg case. Unfortunately for the man, he had already released the “uncorrected” data, to which we had access.

    In my review of 60 official “egg” outbreaks – peer reviewed for my PhD – only four could be reliably attributed to eggs. In three of the “egg” outbreaks, there had been episodes of illness originating from the focal premises before the egg-based food attributed as the vehicle of infection had been consumed and, in one of these, the outbreak (in a hotel) had started before the eggs attributed to the outbreak had been delivered to the premises.

    We also had enormous problems with determining sources of infection in poultry flocks, the official explanation being infected animal feed – despite multiple test results showing no such infection. When we reliably demonstrated airborne infection of poultry sheds from contaminated manure spread on adjoining fields, this was officially rejected because there was no provision for such a category on the official epidemiology form and thus no mechanism for reporting it.

    Yet, on the basis of wholly flawed data, the UK suffered a “scare” which cost many millions and put an estimated 9,000 poultry keepers out of business. The “scare dynamic”, it seems, is alive and well.

    Of course, sometimes the data manipulation works the other way.

  20. Anthony,

    My last comment is inaccurate in terms of when the new data was posted. Turns out since I had accessed that ftp page earlier in the day, my computer was looking at a cached copy, not the new data set. Once I hit my refresh button, the new data file popped up.

  21. The NOAA site is clear that they adjust data historically.

    “These values are subject to change depending on quality control checks of the measured data, but any revisions are expected to be small.”

    I think the important part is that there needs to be a public change log to keep the data credible as well as provide access to historical data sets so that comparisons can be made.

    Trust but verify.

  22. Now here is something odd that perhaps a statistician can address. Why should the difference between the old and new data sets be a specific pattern?

    I compared the old and new data sets between July 2006 and July 2008. I looked at only the seasonal correction numbers. Here are the differences between the old, August 3rd dataset and the new, August 4th dataset in hundredths of a ppm.

    Isn’t it pretty convenient that prior to 7/08 the pluses virtually cancel out the minuses? And what a nice pattern. Every plus is followed by a minus. Why?

    This looks like the data was fudged, quite frankly.

    comparing old versus new dataset

    7/06 plus 3
    8/06 minus 1
    9/06 plus 6
    10/06 minus 2
    11/06 plus 3
    12/06 minus 1
    1/07 plus 1
    2/07 minus 5
    3/07 plus 11
    4/07 minus 11
    5/07 plus 14
    6/07 minus 11
    7/07 plus 3
    8/07 minus 3
    9/07 plus 6
    10/07 minus 6
    11/07 plus 3
    12/07 minus 6
    1/08 plus 7
    2/08 minus 8
    3/08 plus 12
    4/08 minus 7
    5/08 plus 13
    6/08 minus 25
    7/08 plus 71
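
[A quick check of the figures listed above, assuming the transcription is accurate: summing the pre-July-2008 differences shows how nearly they cancel.]

```python
# Adjustments (new minus old, in hundredths of a ppm) as listed above,
# July 2006 through June 2008, excluding the large July 2008 change.
diffs = [+3, -1, +6, -2, +3, -1,        # Jul-Dec 2006
         +1, -5, +11, -11, +14, -11,
         +3, -3, +6, -6, +3, -6,        # Jan-Dec 2007
         +7, -8, +12, -7, +13, -25]     # Jan-Jun 2008
print(sum(diffs))  # net pre-July change: -4, i.e. only -0.04 ppm
```
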

  23. So, why does ALL the data change?

    Have they suddenly discovered the Hansen method of adjustment?

  24. Pingback: Is NOAA Monkeying With Data? « Thunderpig’s Mirror

  25. While we (the world) deserve a clear explanation/understanding of what these adjustments are and why they were made, this is one dataset where the adjustments don’t yet bother me too much, as long as the past trend line does not alter appreciably as a result. The reason it doesn’t bother me is that if we are, indeed, beginning to see a downtrend in the CO2 level measured at Mauna Loa, then they won’t be able to mask it for very long, since CO2, unlike temperature, has been claimed to be monotonically increasing. Any sustained downtrend over the next year or so would break that cycle and change the whole ballgame. On the other hand, if the overall trend resumes an upward direction, then the downtick would have appeared to be an outlier and wouldn’t have survived as the biggest nail in the coffin of the AGW hysteria. So, as usual, we’ll have to wait and see. In the meantime, I look forward to reading whatever explanation you hear back.

  26. These folks should invest in a software engineering technique called “version control”,
    considering how soft and malleable all this historical data appears to be.

  27. Darn, and I thought less driving had stopped global warming. Oh well, Pelosi needs to stick to her no more drilling and get gas up to $5 a gallon. I wonder if she can somehow stop the mining of coal?

  28. Looking forward to the log which will explain changes in Mauna Loa monthly means back to May 1974. I wonder if there is a separate explanation for each month’s adjustment. Also, how many historic data points HAVE been adjusted?
    Thanks,
    Mike Bryant

  29. malcolm (02:04:28) & jeez (02:11:06)

    The point about NSIDC is well taken; there had been several days of a pronounced uptick (showing a slower melt rate) on the melt line up through 31 July. When the line was revised 1 Aug the direction it took suddenly changed to show a quickening melt rate and the change was pushed back by several days.

    Perception is reality — if it appears that data is being continually adjusted, it may lead to the perception that science is unreliable and that the data upon which policy decisions are to be based is flawed. It is not a huge leap for the perception to grow of scientists having their own agenda and cooking the books to realize that goal.

    For the Mauna Loa data to be up for possible adjustment for a year seems rather reasonable. But now, when decades of data are tweaked, a questioning perception may take hold. I patiently await a response to Anthony’s query about the changes.

  30. aaah, this is the real world – the BOM do it here all the time – cold day – oops, the temperature just jumped 2 degrees at Observatory Hill (all the while temps in the rest of Sydney were going down) – and then back again – quite funny really when you get to the point that you can’t trust those who record the weather – I just look at the gauge in my car now – and it’s almost offensive now to talk about the cold – warming is the new black ROFL

  31. Combine this with a blinking graph on the hockey stick too! You may go down in history with Charles Johnson of LGF with the ‘throbbing memo’ of Dan Rather fame…er, infamy. Good catch! Keep em honest out there.

  32. I’ve always been a “show me the data” kind of guy when it comes to this hype about global warming caused by man. It’s why I love this site. What really concerns me is that the government is massaging the data to support some obscure agenda.

    That’s huge if you think about it. I’m anxious to hear their response.

  33. Looks like a pretty typical change to a smoothing algorithm. Didn’t know they used one but maybe they take multiple data points each day and filter them.

    If that’s what they did, then a good question would be “why now?”.

  34. Is this really good or bad ?

    I mean, I can see some who will claim that this is why there has been a drop in global temperatures (i.e. tail wagging the dog).

  35. If any think that this is grasping at straws, just remember that this is a bellwether site for the AGW hypothesis. So it deserves all the scrutiny it gets, and has to live up to the strictest standards because of it.

  36. Anthony,

    However much rancor exists between you and Tamino, that comment by Mike C above might be a bit beyond the bounds of civility that you usually accept at this site.

    As far as CO2 trends goes, it seems like Lucia was spot on about the calibration error.

    REPLY: Edited, thanks for pointing it out. I’m waiting for explanation from MLO, and while I can see calibration error in this past month as reasonable, the changes back down the entire dataset to 2004 don’t seem reasonable for calibration error.

  37. There is also a suspicious graph change at NSIDC: http://www.nsidc.com/arcticseaicenews/

    It seems some of last month’s data has been adjusted downward. Also, there is a very suspicious, almost linear drop from about 28th July up to the 4th of August. A blink comparison can be made with the same graph further down the page (although that graph only runs to 1st August); this would show a definite data adjustment for late July.

  38. One explanation for all these changes is that they are using an error correction algorithm that uses previous trends to determine if there might be an error. The more the data varies from the previous annual/seasonal trend, the more the raw data will be adjusted.

    Each year in the newest post-adjustment graph looks more similar to other years after the changes.

    If that is what they are doing, then we will never see any real variation in the “march of the CO2”. I imagine they have been doing this all along whenever a strange data point emerges.

    The more you look at the historical datasets in the climate science field, the more “rewriting of history” you will find and the less faith you will have in any of the data.

  39. I am not a conspiracy theory supporter. Hopefully, the changes documented above will be shown to be necessary and done for good reason.

    However (although this is a little off-thread), lest anyone think that science will easily carry the day in overturning the AGW meme, I offer this piece from the Boston Globe by Dr. John P. Holdren of Harvard and Woods Hole. It is a reality check that AGW won’t go quietly into the dark night.

    http://www.boston.com/bostonglobe/editorial_opinion/oped/articles/2008/08/04/convincing_the_climate_change_skeptics/

    Excerpts:

    “THE FEW climate-change “skeptics” with any sort of scientific credentials continue to receive attention in the media out of all proportion to their numbers, their qualifications, or the merit of their arguments. And this muddying of the waters of public discourse is being magnified by the parroting of these arguments by a larger population of amateur skeptics with no scientific credentials at all…”

    Dr. Holdren concludes:

    “The extent of unfounded skepticism about the disruption of global climate by human-produced greenhouse gases is not just regrettable, it is dangerous…Those who still think this is all a mistake or a hoax need to think again.”

  40. Arguably, data are the readings taken from the instruments, which should be presumed to be properly calibrated. Once data points are “massaged”, they are no longer data points, but rather a collection of numbers representing what the data points would or should have been had they been collected properly from properly calibrated instruments; and, to my mind at least, the digits in the decimal places changed by the “massaging” are no longer significant.

    I understand that, in the case of climate, it is not possible to go back to the lab, recalibrate the instruments and rerun the experiment. However, that is a compelling reason to assure that the instruments are sited and calibrated properly, not a reason to “massage” the data to assure that it is “on message”.

  41. Unknown and unsubstantiated data adjustments are the way of the future.

    Once government started down the “pay more in taxes so government can pretend to control the weather” path, we were cooked. Government is an honest broker no more, from the day increasing taxes and controlling behavior became the goal of science. Want proof? All you have to do is read the NASA site; the AGW tripe and BS they pass off as PROVEN FACT is just eye-popping.

  42. Regarding the changed chart at the NSIDC, there were actually two different charts shown on the page announcing the end-of-July sea ice extent. One going up and one going down.

    http://nsidc.org/arcticseaicenews/index.html

    I believe the downturn in the chart just reflects newer data (although the one going up is labeled August 1, 2008 and the release came out on this day.)

    You can open both these charts in separate windows and by clicking back and forth, get a before and after animation like Anthony did for the CO2 data. The downturn is a longer line.

  43. We all try not to believe in any “conspiracy theory”.

    But let’s be honest, OK? I’ll say it as it is, and maybe say what many think but dare not say, because we all want to appear serious and well balanced:

    Take the last hundred “adjustments” we have had in recent years.
    I mean adjustments that make a difference in the debate pro/contra AGW beliefs.

    How many of those adjustments happen to favour IPCC/AGW/alarmist ideas?

    I have NEVER seen a single big adjustment done by the alarmists that does not end up supporting their opinions. NEVER.

    But let’s say I have missed some, and it’s only 90% of their corrections that support themselves.

    If they were honest, adjustments should support them in 50% of cases and support the sceptics in the other 50%.

    Adjustments are the result of bad equipment and the like; they should not have a tendency to always support one viewpoint.

    How big, statistically, is the chance that if you throw a die 100 times, you get 4, 5 or 6 90% of the time? Statistically it’s virtually impossible.

    That’s why I believe it’s fair to finally conclude: “So many adjustments that just happen to always support the ones who make them? It’s not statistically possible, and thus I find it hard to believe.”

    This does not mean that you are a conspiracy freak. It just means that you have the ability to think.

  44. Any bets on how soon data from satellite surface temperature measurements are put through the Hansenizer?

  45. Although the correction applied to the Mauna Loa CO2 reading for July may be due to a calibration or other error, it still does not change the fact that the annual January-July differential increase in CO2 content for 2008 is the lowest in the 50 years that data for January and July have been recorded. During that period, from 1959 to 2008, the January-July differential increase was less than 1 ppm in 10 years, between 1-2 ppm in 34 years, between 2-3 ppm in 5 years, and exceeded 3 ppm in one year, that being 1998, when the differential recorded was a whopping +3.6 ppm. The next lowest year after 2008 was 2004, when the differential was +0.4 ppm, which is nearly 6 times greater than the 2008 differential increase of +0.07 ppm. This relatively small increase is thus significant, although not nearly as much as had the January-July differential been negative. Despite high fuel prices, I doubt that mankind’s CO2 emissions have dropped off significantly in the past 6 months. This clearly suggests that natural forces play a much larger role in regulating the atmospheric CO2 content than do anthropogenic CO2 emissions.
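
[The +0.07 ppm figure cited here can be reproduced from the seasonally corrected (“trend”) column of the August 4th data in the post above:]

```python
# January and July 2008 season-corrected values from the August 4th file.
trend_jan, trend_jul = 385.18, 385.25  # ppm
differential = round(trend_jul - trend_jan, 2)
print(f"Jan-Jul 2008 differential: {differential:+.2f} ppm")
```
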

  46. Ed Reid,

    What is possible is to collect duplicate or triplicate samples. Alternatively, they could do something similar to what they will be doing in Beijing pretty soon, which is to divide the single urine sample into two or more subsamples; then, if a spurious result is returned by the analysis, the instruments can be rechecked and the duplicate or subsample can be tested to see if the result can be duplicated, or if it is substantially different, suggesting that a calibration or some other type of error occurred.

  47. I think it is clear that they increased the smoothing of the data. Notice how the peaks and valleys in the data all decrease in magnitude, pulling the curve toward linearity.

    The question is, why the sudden need for increased smoothing of the data?

  48. It’s interesting if you use the annual average Mauna Loa data to compare the annual ppm change in CO2 concentrations to estimates of global population and fossil fuel CO2 emissions in 1977 and 2007. With all three data sets indexed to 1.0 in 1977, you get values of 1.7 for population growth, 1.9 for carbon emissions and 0.8 for the annual ppm increase in CO2. Therefore, carbon energy use almost doubles with population, yet the rate of increase in CO2 decelerates to less than it was in 1977.

  49. A reply from Dr Tans to my earlier email –

    —–Original Message—–
    From: Pieter Tans [mailto:Pieter.Tans@noaa.gov]
    Sent: Tuesday, August 05, 2008 10:37 AM
    To: Denise Norris
    Subject: Re: Mauna Loa CO2 trend

    Denise,

    The reason was simply that we had a problem with the equipment for the
    first half of July, with the result that the earlier monthly average
    consisted of only the last 10 days. Since CO2 always goes down fast
    during July the monthly average came out low. I have now changed the
    program to take this effect into account, and adjusting back to the
    middle of the month using the multi-year average seasonal cycle. This
    change also affected the entire record because there are missing days
    here and there. The other adjustments were minor, typically less than
    0.1 ppm. Too bad for the self-described “skeptics”.

    Pieter Tans

    Denise Norris wrote:
    > Dear Dr Tans,
    >
    >
    >
    > I just noticed NOAA upward adjusted the Mauna Loa CO2 for July 2008, but
    > I could not find a explanation on the website. As CO2 is of great
    > interest to a number of people, is there a specific reason for the
    > adjustment? The original value of 384.93 created a little bit of a stir
    > amongst the skeptics.
    >
    >
    >
    > Thank you,
    >
    >
    >
    > Denise Norris
    >

    ——————————

    and my response:

    —–Original Message—–
    From: Denise Norris [mailto:xxxxxxxxx@xxxxx.xxx]
    Sent: Tuesday, August 05, 2008 11:37 AM
    To: ‘Pieter Tans’
    Subject: RE: Mauna Loa CO2 trend

    Dr Tans,

    Thank you for the clarification of the July data adjustment.

    How frequently is the data revised backwards to the extent of 1974? What sort of event would require that sort of adjustment?

    I know all this may be a bother, but Mauna Loa is the crown jewel of the CO2 monitoring world and when there is even a minor adjustment, I feel transparency is the best way to head off needless controversy.

    Thank you.

    Denise Norris

    REPLY: FYI part of that response was cut and pasted to me also, except for the last line, and apparently other bloggers got the same response you and I did. – Anthony

  50. Just an observation:

    Where the data points of the trend line and the seasonal line meet, the adjustment is far less than where the lines don’t touch.

  51. “The extent of unfounded skepticism about the disruption of global climate by human-produced greenhouse gases is not just regrettable, it is dangerous…Those who still think this is all a mistake or a hoax need to think again.”

    Re-education camp for you, tovarich! Thinking is dangerous! Sit down, shut up and believe what you’re told.

  52. Anthony – perhaps I’m missing something, but the latest date’s numbers weren’t the only ones to change, based on your blink comparison… watch how many of the spikes appear to suddenly level off – the excuse that the equipment broke for a few days doesn’t hold up across the long-term differences in that blink comparison.

    REPLY: I spotted that immediately when I made the blink compare, but I’m waiting to hear additional explanations before making my own comments on it.

  53. The explanation has been posted elsewhere, and is simple and proper.

    They don’t get daily readings from Mauna Loa, for various reasons. This has been true throughout the entire record. Mauna Loa is designed to measure well-mixed air arriving on the trade winds, and on days when the wind isn’t blowing, they don’t use the measurement to avoid measuring local air, for example. Some days are missing from nearly every month.

    For monthly averages, they have been simply averaging the daily readings they get for that month. In July 2008, they only got readings for the last 10 days of the month. Since CO2 levels fall during July, the last 10 days have a lower average than the entire month would have. Apparently, this was the first time they had such a skewed set of daily averages to apply for the month, it made a substantial difference to the monthly average, so they (quite properly) decided to correct it – the average of the last 10 days of July, when there are falling values for all of July, is NOT the appropriate number to use for the average for all of July.

    They applied an algorithm to adjust the monthly average to account for which days in the month are missing, and they (again, quite properly) applied it to their entire record. This would make almost no difference in months where the slope of the record is near zero, or where there are missing values near the middle of the month. It will make a bit of difference when missing values are near the ends of the month. And it leads to more accurate values for every month.

    Thus the changes you see throughout the entire record.
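    The adjustment algorithm described above can be sketched in a few lines. This is an illustrative guess at the method, not NOAA's actual code; the function name and toy numbers are hypothetical:

```python
# Hypothetical sketch: correct a partial-month mean using a multi-year
# average within-month seasonal shape.  Illustrative only, not NOAA's code.
from statistics import mean

def adjust_partial_month(daily_values, days_present, seasonal_cycle):
    """daily_values   : readings for the days actually measured
       days_present   : 1-based day-of-month for each reading
       seasonal_cycle : climatological CO2 anomaly per day of this month"""
    raw_mean = mean(daily_values)
    # Climatology averaged over the days we have vs. the whole month:
    cycle_present = mean(seasonal_cycle[d - 1] for d in days_present)
    cycle_full = mean(seasonal_cycle)
    # If the measured days sit on the low part of the seasonal curve,
    # the correction shifts the mean back up, and vice versa.
    return raw_mean + (cycle_full - cycle_present)

# Toy month: CO2 falls 0.1 ppm/day, only the last 10 of 30 days measured.
cycle = [-0.1 * (d - 14.5) for d in range(30)]      # zero-mean decline
true_daily = [385.0 + c for c in cycle]
raw = mean(true_daily[20:])                          # 384.0: biased low
adjusted = adjust_partial_month(true_daily[20:], range(21, 31), cycle)
```

    In the toy case the correction exactly recovers the 385.0 full-month mean because the assumed decline matches the climatology; with real data the correction is only as good as the multi-year average cycle.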

    REPLY: Another new name Lee? What is that, six now?

  54. I have thought about Dr Tans’ explanation for the July adjustment and accept that the original value would have been skewed if only the last 10 days of July had actual data and the rate of change accelerates during the month.

    Basically, the July data point is a composite of the prior July data with the last ten days.

    Now the interesting point which I just realized is that his ‘change to the program’ then back-filled missing days for the previous 34 years thereby adjusting the values.

    This is the sort of adjustment that DQA is supposed to cover.

  55. Bill Illis (07:35:48)

    Thank you sir!

    That time series graph that is part of the Aug 1 report of NSIDC (Fig 2) is actually showing the data as of 31 July and it clearly shows that as of the end of July the melt rate had slowed.

    It’s good to know I wasn’t imagining it.

    I missed saving the graph for 1 August (posted 2 Aug), but I have saved images for 2, 3, & 4 August, named August_02, 03, & 04, with the July 31 values saved as August_00.

    Now, when I flip between Aug 00 & 02, I can clearly see how changes were made several days back into July.

  56. If that response is from a “Dr”, assuming that would be a Ph.D. scientist, his last comment brands him as nothing more than a demagogue, not a scientist. Shame on him.

  57. “Too bad for the self-described “skeptics”.”

    Pointless condescension towards people interested in what you’re working on. How classy.

  58. How convenient: “This change also affected the entire record because there are missing days here and there”. How did they figure it out so fast? And if they knew that they didn’t have most of the data for July, why did they go forward with incomplete data?
    This is just the answer I hear every time someone has screwed up. They should make all the data public, and say which days were inserted. It would not survive statistical analysis if the data was faked.
    Ecotretas

  59. The Watergate rule is now in effect.

    It is not the crime that is the big problem, it is the cover-up.

    Let the “explanations” begin.

    This should be most entertaining.

  60. Redneck,

    Understand that. However, all the way back to 1974? All retested in 24 hours? Strains credulity!

    Also, the “crown jewel” of the AGW crowd is analyzed by a single piece of equipment, which is out of service for 20 days? While I know it is certainly possible, I find it exceedingly strange.

  61. I don’t see how a couple of weeks’ missing data can cause adjustments going back for 35 years. These algorithms which readjust all the way down the line to “preserve the story” are a bit much. (Even if all calculations were carefully explained and documented, which they are NOT.)

    If one makes a zillion small adjustments that slowly add up to a large adjustment, one can look up innocently and say the adjustments are very small (i.e., the classic “fallacy of the parts”).

    This is getting to be like being licked to death by a St. Bernard.

  62. They need to publish their data adjustment algorithm along with the daily raw data, so an independent analysis can be done of the algorithm and the effect of missing data points. Seriously, this should not be controversial.

  63. Thank you Dee!

    a) “Too bad for the self-described “skeptics”.”

    b) EGAD! I hate to sound snippy… but I guess we know how Dr. Tans feels personally about what the data “should” say.

    c) As asked before… why did these adjustments need to start right now? I’m certain they’ve had instrument failures/issues in the past, right?
    HINT: see a) above.

  64. Anthony,

    In the letter sent to you by Dr. Tans, he states,

    “The reason was simply that we had a problem with the equipment for the
    first half of July, with the result that the earlier monthly average
    consisted of only the last 10 days. Since CO2 always goes down fast
    during July the monthly average came out low. I have now changed the
    program to take this effect into account, and adjusting back to the
    middle of the month using the multi-year average seasonal cycle.”

    So, correct me if I’m wrong, but what he is saying is that they only had measurements for the last ten days of the month, and the average of these actual measurements was the original 384.93. The new “measurement” adjusts this figure with some kind of interpolation, thus “smoothing” the data. This “smoothing” interpolation now extends back through the entire record. It would be worthwhile to see how this was done.

    To make this whole thing more transparent, I think that Dr. Tans needs to release the entire Mauna Loa record of actual measurements, and that they should update their website to post the data on the net in real time. This way people can do their own interpolation to the data without NOAA as an intermediary.

    Since CO2 drops in July and the monthly average of actual measurements was 384.93, this implies that the level on July 31st may have been actually lower than 384.93.

    Kim

  65. Tom Barney: The Mauna Loa data (as we have seen) is strongly affected by ENSO (which affects uptake by the Pacific Ocean), so the difference in ppm increase between 1977 and 2007 may be because of an El Niño instead of a La Niña?

  66. My comments and questions may seem rather innocent if I am not interpreting the Pieter Tans explanation correctly. I don’t know, but I will make them anyway.

    First, in the magnified chart that Anthony showed on the previous post, the delta was measured from Jan 8 to Jul 8. If their monthly average consisted only of the last 10 days of July, due to malfunctioning equipment, then measuring the delta from Jan 1 to July 1 (or from Dec 31 to June 30) should not be affected by this problem. And you would be measuring exactly the first half of the year. What would that show in comparison with the same span (Jan 1-July 1) in the past? Would it still be negative? If so, would it still be the first instance of a negative delta for the first six months of the year?

    I wonder if I am interpreting this sentence right:

    “…we had a problem with the equipment for the
    first half of July, with the result that the earlier monthly average consisted of only the last 10 days.”

    Does that mean the equipment didn’t record anything before the last 10 days of the month, or does it mean that it did, but the data for the previous 20 days was bad, or failed to be considered in the monthly average? What exactly does it mean?

    Were they unaware of that problem? If so, why did they release the data before fixing it? Did they really only find out last night that their monthly average was based on just 10 days of data?

    I may be looking at this the wrong way, but maybe someone can enlighten me.

  67. Jerker Andersson (00:09:47) :

    “What is interesting is that it happens within 24h after they release a result that shows a new record low for first 7 months.”

    I agree. That means there is a big risk of quality mistakes in the change.

    “If there is an error and they notice it, it must be corrected. In the process it looks like they found errors on every month, at least for the last 4 years.”

    They say they ran the adjusted program for all days of all years, putting in “some data” (could be running mean values) on days when there was none. But a problem, I think, is if they have changed data according to new code which hasn’t gone through a planned quality validation process of at least a week or so. That’s not acceptable in any organisation with a quality reputation to defend.

    What I’ve read about the Mauna Loa record is that data from the days when the wind is blowing from the volcano are omitted. I hope they have not included any of that data now! A fast change like this by some engineer is always very risky.

    It also doesn’t seem likely that the first July value presented covered only the last 10 days of the month, since the change when the first 20 days of data were included is only about 20 percent or so… It should be at least a 50 percent change if the first 20 days of data were actually missed at first and included later. That’s because of the continuous decline, with the lowest values at the end of the month.

    Another thing: I also find it surprising that Lucia suggested this July value was a calibration error that was going to be corrected. Isn’t calibration something which is done only once, and doesn’t recalibration create a discontinuity in a record?

    Finally: if Henry’s law is real (and it is), no one should be surprised if the CO2 concentration rises more slowly, or stops rising, during a cooling phase in the coming years. No matter that we yearly emit about 3 percent of the naturally emitted CO2. (Only about 5 percent of the CO2 in the atmosphere is measured to be from fossil fuel.)

    Did they start that process yesterday also, reverifying those 4 years?
    Did they actually manage to recalibrate and make new measurements of all those samples from the last years in one day, or is it a mathematical adjustment of the results?

    It looks like almost all data points that stick out a bit have been smoothed up AND down, creating a straighter line showing less variability.

    The adjustment does not seem to affect the overall CO2 trendline in any significant way though, just making it smoother.

    So what errors did they find that gave both too high values when the CO2 level was above trendline and too low when it was below trendline?

  68. Not that most of our elected reps are worth a damn anymore, but if you happen to have a congressional contact that is even mildly skeptical of NOAA’s activities, they need to see this. Further, they need to call up the head of NOAA to explain the fuzzy math that at least one of their observatories is using. We get nowhere without some political push-back… I have already sent a letter to my (one) receptive Senator…

  69. Ooops!

    I forgot to remove some text (by Jerker) in the last 4 paragraphs. From “did they start that process…” onward, all those lines should be removed!

    I’m really sorry.

  70. It sounds to me like they are trying to adjust the measurements taken in every July to compensate for rapidly decreasing values. If values decrease rapidly, changes from one July to the next may not be consistent, since the data is selectively chosen for each day, meaning that there may be days when the criteria for selection (wind speed, direction, duration, etc.) were not present. To make the data more consistent from one July to the next, it looks like some kind of statistical equation was coded in: if this or that occurs, “do that” to it (code language to look for something in particular in the data and then adjust it). I do know that for rapidly changing and highly variable data, it is hard to compare one month to the next similar month (i.e., in this case the Julys) because there is nothing similar about them. I don’t know what to think about the equipment problem.

  71. What I took from what Pieter Tans said was that there were missing days in the record. That causes them to have to calculate a “fill” value. That “fill” value is probably a function of the average over time. When something is done to change that average over time, it changes those fill values and when those fill values change, the monthly average values change.

    If you were looking at daily data instead of monthly data, you would only see certain days jump around with the change. But those few days do impact the average.

    What bothers me most is that errors seem to get found and corrected that make the CO2 level “too low”. But I wonder how many errors there might be that make “too high” and aren’t found because rising CO2 is the expected result. Only when something is lower than it should be does anyone notice anything. How many times in the past were readings higher than they should have been? We will never know.

  72. (I also forgot to switch off the italics in my first sentence. I’ve got a terrible headache and feel sick :-( )

  73. And were the missing days in previous months and years always in the second half of the month?
    I have not studied the changes carefully, but it does seem odd if the alterations only seem to go one way (in the main).

  74. It strikes me as peculiar that the only errors that NOAA discovers on its own are errors that, when “corrected”, bring the data in line with CO2 induced global warming. I’m not saying that they are deliberately manipulating the data, I’m saying that it is unusual that the only errors discovered are ones that are at odds with the accepted theory.

    When I perform tests the data that most bothers me is the data that is exactly what I expect. That is the data that I double check. NOAA seems to be operating on a different principle.

    If I were at a roulette table and I saw the casino owner’s wife win nine consecutive times I wouldn’t necessarily conclude that the table is rigged, but just the same I would move my gambling to another casino.

  75. “the earlier monthly average consisted of only the last 10 days.”

    It took them 30 years to figure out that monthly averages should involve division by the number of samples, not the number of days? Where’s their raw data?

    If one makes a zillion small adjustments that slowly add up to a large adjustment, one can look up innocently and say the adjustments are very small (i.e., classic “fallacy of the parts”).

    Evan, it’s not at all unlike stories about bank employees who have been caught embezzling the rounded fractions of cents from thousands of accounts. Pretty soon you’re talking serious jack.

    Were they unaware of that problem? If so, why did they release the data before fixing it? Did they really only find out last night that their monthly average was based on just 10 days of data?

    Xavier, at the risk of sounding nasty, we ARE talking about government bureaucrats here, PhD or not.

  77. Frank L/ Denmark (07:45:57) :

    “How many of such adjustments happens to favour IPCC/AGW/Alarmists ideas?

    I have NEVER seen a single big adjustment done by the alarmists that does not end up being a support for their opinions. NEVER.”

    I’ve been observing that, too. All of the “adjustments” of temperature and CO2 seem to go in one direction only: in support of AGW. All of them. What are the odds, eh?

    The public’s perception of NOAA and GISS credibility is getting close to this: click

  78. Let me see if I understand this. I goofed and missed that my meter was broken for 21 days, so I need to take the average over the time of record and use that to fill the days I missed. (I hope no one notices that I goofed.) Oops, that change also changed every record back to 1974 for missed days. (Dang, since the average is higher than the observations from ’94 back, it brought the whole record up some.) HE HE HE, the record shows that the CO2 problem is worse than we thought. (Take that, skeptics.)

    The parts in parentheses are what I think they are thinking.

    Anthony or Charles you may snip if desired. I just hate to be taken as completely ignorant even if my education only goes to an associate level. I can read and understand most of it.

    Bill Derryberry

  79. Due to the lack of disclosure, there is always a chance that the entire adjustment was because the self-described “skeptics” were running with the July drop and this made people nervous enough to attempt a more accurate value using the method Dr. Tans describes.

    However, because of the degree to which July 2008 is missing the actual data it is useless to draw any conclusions based upon it. We are going to have to wait for more data, something we would have had to do in the first place.

    I have no doubt the AGWers monitor all the skeptic-aligned sites/blogs and would have screamed at NOAA the moment the old July 2008 value hit the skeptic radar screen. I suspect that someone has already told Dr Tans I am a climate criminal skeptic (not of increasing CO2, however).

  80. Echoing several others:

    It would be good if logs were kept of all historical adjustments.

    Site history information probably should include:

    1. Records of routine equipment checks and maintenance
    2. Records of how and when data is taken
    3. Explanation of change
    4. Data adjustment details
    5. Names of the scientists involved

  81. Stan: Glad to see you back.

    Phineas Freak (the “smart one” of the Fabulous Furry Freak Brothers), in that immortal masterpiece, The Idiots Abroad, amasses a fortune by hacking into an oil company computer and diverting one cent from each barrel of oil sold. He then greatly compounds the wealth by founding a fake religion to launder his ill-gotten gains.

    There are a few uncomfortable parallels here, actually . . .

  82. Too bad for the self-described “skeptics”

    Nah, that’s not it. It’s a pretty good day for this skeptic. I got to see, almost real-time, an inconvenient trend reversal in CO2 erased. We can’t be having CO2 concentrations following global temps down, now can we?

    I imagine high-fives all over the top of Mauna Loa after this smoooooth moooove.

    Xavier, at the risk of sounding nasty, we ARE talking about government bureaucrats here, PhD or not.

    The explanation sounds as vague as can be. They had “…problems with the equipment” which wiped out the first 20 days of data? And they didn’t know this until yesterday?

    And then: “This change also affected the entire
    record because there are missing days here and there.”

    Why would this change “affect the entire record” years into the past? What does this “equipment problem,” which occurred “in the first half of July,” have to do with “missing days here and there” in previous years? How did the discovery of this problem, supposedly yesterday, lead them to the discovery of the “missing days here and there” in previous years? If there was no connection between the two problems, did they really discover them coincidentally on the very same day, which coincidentally happened to be the day Anthony noticed the unusual drop in July?

    ???

  84. Denis Hopkins (10:35:15) :

    “…it does seem odd if the alterations only seem to go one way…”

    They must introduce the same change for rapid increases too (if they want to be seen as serious). This hasty, “quality risky” change wasn’t necessary anyway, given their disclaimer that data for a month shall be regarded as preliminary.

    The explanation sounds as vague as can be. They had “…problems with the equipment” which wiped out the first 20 days of data? And they didn’t know this until yesterday?

    And then: “This change also affected the entire
    record because there are missing days here and there.”

    HEY! It’s Hawaii… Lighten up you Haoles!

  86. The increase from January to July is still very tiny; if global cooling is real, then the CO2 increase will continue to slow down, like it did in 1945 with the ENSO and PDO switch to negative. Still, the way NOAA adjusted (or “adjusted”) the data in such a quiet way so people would not notice is way too suspicious for me to ignore. I do not think NOAA is an organization that has much credibility anymore, considering their little “climate change” report propaganda, not to mention the very suspicious Smith and Reynolds global temperature measurements, whose existence is hard even to learn about, and when they are mentioned, it is always in a very bad light.

  87. Just a little heads up … the July data are now in NOAA’s NCDC Climate At A Glance page. The page says June still but the July data are there. It looks like this July was cooler than last year. Last year’s North American July average (according to NOAA’s adjusted USHCN v2 numbers) was 75.56 F and this year is 74.93, for a 0.63 drop from last year. Preceding 12-month temperatures from 1998 to this year show a negative trend of 0.4 degrees/decade.

  88. So the new data really is not the July data but the average of the data for the past … how many years? Boy oh boy, gotta love that climate science!

  89. It’s perfect! It’s beautiful! It is an excellent example of data modeling. The numbers for July were lower than they expected. So they used a COMPUTER PROGRAM to extrapolate the numbers on the ASSUMPTION that the missing numbers (first half of July) would have been higher. Then they ran the COMPUTER PROGRAM for a few years back to IMPROVE those calculations as well – to APPLY THE MODEL.
    THIS, my friends, is how computer models work. This is an excellent example of how the single most important part of the global warming hysteria – the computer models – change by ASSUMPTION and EXTRAPOLATION.
    (my creds – 28 years in computing including work on several models)

  90. The adjustments go both ways as seen on this plot: http://tinyurl.com/6qb3sg

    The net gain is 0.19 ppmv over the entire 34-year record – this includes the July 2008 adjustment. If we back out the July 2008 adjustment of 0.67 ppmv, the net change becomes a decrease of 0.48 ppmv.

    I really don’t see anyone here diddling with the data-set in order to amplify the AGW aspect.

  91. Beaker: current ENSO view neutral-
    •ENSO-neutral conditions are present in the equatorial Pacific Ocean.

  92. I have been having an ongoing email exchange with Dr Tans. In the last go round, I asked him to confirm my understanding of the nature of the adjustment.

    I wrote:
    “Am I correct that when you changed the program to account for the missing 20 days in July, there was a backward propagation of adjustments filling in for other missing days?”

    Dr Tans replied:

    “You are good.

    When I was at it, I made another adjustment to the program. I used to fit 4 harmonics (sine, cosine with frequencies 1/year through 4/year) to describe the average seasonal cycle. I changed that to 6 harmonics.
    Therefore, there will be small systematic differences as a function of time-of-year in the de-seasonalized trend. That will be on top of adjustments caused by months in the past during which there were a number of missing days not symmetrically distributed during that month.”

    I think we are too conditioned to data getting Hansenized and may be jumping to conclusions. So far, unlike Hansen, Dr Tans has been forthright with communicating his approach.
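    The harmonic fit Dr Tans describes is a standard Fourier decomposition of the seasonal cycle. Here is a generic sketch of what fitting K harmonics looks like; it is not NOAA's program, and it assumes evenly spaced monthly data covering whole years, where the least-squares fit reduces to a direct projection:

```python
# Generic sketch of fitting K harmonics (sine/cosine at 1..K cycles
# per year) to an average seasonal cycle.  Illustrative only.
import math

def fit_seasonal_harmonics(monthly_anomalies, n_harmonics):
    """Return (a_k, b_k) cos/sin coefficients for k = 1..n_harmonics."""
    n = len(monthly_anomalies)
    coeffs = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(x * math.cos(2 * math.pi * k * i / 12)
                          for i, x in enumerate(monthly_anomalies))
        b = 2.0 / n * sum(x * math.sin(2 * math.pi * k * i / 12)
                          for i, x in enumerate(monthly_anomalies))
        coeffs.append((a, b))
    return coeffs

def seasonal_value(coeffs, month_index):
    """Evaluate the fitted seasonal cycle at a month (0 = January)."""
    return sum(a * math.cos(2 * math.pi * k * month_index / 12) +
               b * math.sin(2 * math.pi * k * month_index / 12)
               for k, (a, b) in enumerate(coeffs, start=1))

# Sanity check: a pure annual cosine of amplitude 3 ppm is captured
# entirely by the first harmonic, whether we fit 4 or 6 of them.
data = [3.0 * math.cos(2 * math.pi * i / 12) for i in range(24)]
c4 = fit_seasonal_harmonics(data, 4)   # the old program's 4 harmonics
c6 = fit_seasonal_harmonics(data, 6)   # the new program's 6 harmonics
```

    More harmonics let the fitted curve follow sharper features of the real seasonal cycle, which is presumably why the de-seasonalized trend shifts slightly as a function of time of year.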

  93. Pieter Tans has replied to several questioners who asked why the data have been altered. His reply says:

    “The reason was simply that we had a problem with the equipment for the first half of July, with the result that the earlier monthly average consisted of only the last 10 days. Since CO2 always goes down fast during July the monthly average came out low. I have now changed the program to take this effect into account, and adjusting back to the middle of the month using the multi-year average seasonal cycle. This change also affected the entire record because there are missing days here and there.”

    But this explanation is extremely implausible in the light of the observation (above) by Kim Mackey (03:49:35).

    Kim Mackey lists the adjusted data in chronological order and shows that the adjustments were plus, minus, plus, minus, etc. throughout the series. And – according to Pieter Tans – this pattern of alternate increases and reductions is a result of corrections to days when data is missing as a result of faulty equipment and/or weather.

    But the odds of this pattern happening by chance are the same as tossing a coin 25 times and gaining a sequence of heads-then-tails throughout all 25 tosses. It could happen, but the odds of it happening are comparable to the odds of winning the grand prize in the National Lottery.
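    The arithmetic behind that coin-toss comparison, as a quick check:

```python
# If each of 25 adjustments were an independent 50/50 coin flip, the
# first sign is free and each of the remaining 24 must land opposite
# its predecessor to give a strictly alternating +,-,+,- pattern.
p_alternating = 0.5 ** 24
one_in = 1 / p_alternating   # 16,777,216: roughly lottery-jackpot odds
```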

    Indeed, the extremely improbable pattern of the changes is reason to reject the (above) suggestion by crosspatch (10:26:49) that the data alteration is insertion of simple “fill” values unless evidence is provided to demonstrate that such values were calculated and used.

    When I smell a rat then I look to see if a rat is present so I can eliminate it.

    We need to see
    (a) the original data,
    (b) the adjusted data, and
    (c) the algorithm used to make the adjustments.

    Until these three sets of information are provided then I think it is naïve to accept the data as being valid.

    Richard S Courtney

  94. I’m skeptical of the explanation you received.

    Of course, it is possible to have problems with equipment. It is possible that partial data would have an effect on the average.

    Where I am skeptical is that the adjustments seem to be made only when the data don’t correspond to the preconceived notions of the people taking the measurements. What’s more, many rely on this data set, which has no other corroborating evidence. Climate science is really a world leader in data adjustment and creation.

  95. Dee Norris:

    “I think we are too conditioned to data getting Hansenized and may be jumping to conclusions. So far, unlike Hansen, Dr Tans has been forthright with communicating his approach.”

    I agree. Let’s not go making mountains out of molehills. The overall impact of the change is trivial, and I am satisfied with Dr. Tans’ explanation. We should be applauding his transparent approach to what happened and what he has done, and move along.

    I am still bothered, though, that only data that goes unexpectedly in one direction gets attention but that is more of an overall issue and not something directly aimed at Dr. Tans’ work.

  96. @Richard S Courtney

    I think that your post and my update from Dr Tans missed each other.

    If Dr Tans changed the harmonics that were used to compute the missing days and groom the data, it could very well lead to the adjustment pattern we are seeing on the plot here: http://tinyurl.com/6qb3sg

    Without a complete description of his methodology before and after, it is hard to really be certain.

  97. Magnus: Weather data from the Kona airport (FAA) should show wind direction and velocity daily for all 24 hours. Whenever the wind blows from the south (a Kona wind), the VOG (outgassing) will corrupt data collection on Mauna Loa. It should not be too difficult to compare the NOAA data that is discarded with the FAA’s actual wind data to determine whether NOAA is cooking the books rather than accumulating data honestly.

  98. To Dee Norris:

    You accurately point out to me:

    “If Dr Tans changed the harmonics that were used to compute the missing days and groom the data, it could very well lead to the adjustment pattern we are seeing on the plot here: http://tinyurl.com/6qb3sg

    Without a complete description of his methodology before and after, it is hard to really be certain.”

    Yes, I completely agree. Indeed, I take your point to corroborate my assertion that:

    When I smell a rat then I look to see if a rat is present so I can eliminate it.

    We need to see
    (a) the original data,
    (b) the adjusted data, and
    (c) the algorithm used to make the adjustments.

    Until these three sets of information are provided then I think it is naïve to accept the data as being valid.

    Richard

  99. Dr Tans replied:

    “You are good.

    When I was at it, I made another adjustment to the program. I used to fit 4 harmonics (sine, cosine with frequencies 1/year through 4/year) to describe the average seasonal cycle. I changed that to 6 harmonics.
    Therefore, there will be small systematic differences as a function of time-of-year in the de-seasonalized trend. That will be on top of adjustments caused by months in the past during which there were a number of missing days not symmetrically distributed during that month.”

    So, they can just do this? A guy can sit down to fix a problem from three weeks ago that has supposedly just been discovered, and while he is “at it,” go ahead and make another adjustment to the historical record just because he feels like it at the moment?

    Is that how it works? Are these changes logged somewhere? Is the rationale for these changes the product of previous discussions, of which a record is kept, or can it be just a spur of the moment thing that someone happens to think of while doing something else? Can the changes he describes be replicated?

    In short, the July fix, plus the harmonics change, plus the previous “missing days” fix had to be done all together, and they had to be done precisely last night?

  100. @crosspatch

    If my understanding of the July CO2 level changes is correct, the rate of (negative) change increases throughout the month and into August.

    So, if we only average the last 10 days of the month to get the mean, we are injecting a bias into the value because we will show a greater decrease for the month. Likewise, had the data been for the first 10 days, we would see a lesser decrease for the month. Each monthly mean is adjusted to mid-month according to the data-set description.

    It then becomes a safe bet to say that the old July 2008 data was incorrect, but we don’t know how correct the new data set is. To get an estimate of the accuracy of the adjustment, I would want to see the last 10 days of July for several years past and compare them to 2008. If the fit is good, then adjusting using prior history is reasonable. If the fit is not good, then we have a problem.
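    A toy calculation makes the direction of that bias concrete. The numbers are hypothetical and assume, as described above, that the rate of decline accelerates through the month:

```python
# Day d loses 0.01*d ppm, so late-month days fall faster than early ones.
from statistics import mean

level, daily = 387.0, []
for d in range(1, 31):                # a 30-day month
    daily.append(level)
    level -= 0.01 * d                 # accelerating drop

full = mean(daily)                    # true monthly mean, ~385.50
first10 = mean(daily[:10])            # ~386.84: understates the decrease
last10 = mean(daily[-10:])            # ~383.84: overstates the decrease
```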

    Any unusual acceleration in CO2 levels for July should also manifest itself in August to some extent. We will just have to wait and see.

  101. The adjustment made is minor as far as the trend is concerned. The issue is not likely malicious intent, so I think people should calm down (my opinion, not acting as moderator).

    But…there ARE two issues that this raises, noted earlier.

    1. The mindset or groupthink that triggers such scrutiny by those reporting the data. That is easy. If people believe a theory, they will only notice data as aberrant in conflict with that theory. So groupthink or a belief system does have an unconscious influence on actions like this.

    2. The audit trail that should be in place when such adjustments are made. This is the big unresolved issue.

  102. Talk about observational bias writ large. Every other science is acutely aware of observational bias, but the climate crowd seems to think they are scientists so it doesn’t apply. Dr. Tans only “noticed” the distortion when it went outside his preconceived notions of what the correct values “should be.” How many bad readings that matched his expectations were never investigated because there was nothing obviously wrong? At the NIH my blood pressure was measured on the same machine that measured my grandparents’ in 1947. A gorgeous walnut cabinet with double-blind calibrated mercury pressure gauges. The researcher explained that without the double blind the readers would unconsciously bias the readings. I notice the same possibility in the manual entries for my local NOAA station, where the daily readings are suspiciously too often the low of the day and the low is rarely lower than recent past lows. It is classical observational bias to resist a new lower low when all the past readings are there to compare.

    Somebody needs to graph the last 10 days of the month unadulterated data going back and see what that looks like.

  103. @Richard S Courtney

    At this point, I don’t smell a rat, I smell a lack of transparency and an audience who is worried that Hansen’s behavior is spreading to other scientists.

  104. He’s just following the Hansen Rule… all adjustments have to be up.

    So simple, so easy. Keeps the fear factor high and the Grant money flowing.

  105. Beaker:
    Thanks for the info. I mistakenly identified the base index year as 1977; it was 1973, which was an El Niño year.

  106. @Xavier

    I would bet that the change from 4 harmonics to 6 harmonics was planned for a while; that sort of thing you don’t do on the fly. The needed correction of the July 2008 data may have been the trigger to execute it.

    As I understood his email, the change in harmonics was done to further reduce the influence of seasonal variation (to de-seasonalize) in the CO2 record. If this is correct and the change does accomplish that, it is a good thing.
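    For readers wondering what “fitting harmonics” means in practice, here is a minimal sketch (synthetic data and my own function names; not NOAA’s actual code) of fitting a linear trend plus annual sine/cosine harmonics and subtracting the seasonal part:

```python
import numpy as np

def fit_seasonal(t, y, n_harmonics=6):
    """Least-squares fit of a linear trend plus n_harmonics
    annual sine/cosine pairs. t is in decimal years."""
    cols = [np.ones_like(t), t - t.mean()]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = X[:, :2] @ coef[:2]      # constant + linear part
    seasonal = X[:, 2:] @ coef[2:]   # harmonic (seasonal) part
    return trend, seasonal

# Synthetic monthly record: trend + one annual cycle + small noise
rng = np.random.default_rng(0)
t = 1974 + np.arange(12 * 30) / 12.0
y = 330 + 1.8 * (t - 1974) + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

trend, seasonal = fit_seasonal(t, y, n_harmonics=6)
deseasonalized = y - seasonal
```

    With 6 harmonics the fit can follow a slightly sharper seasonal shape than with 4; the de-seasonalized series is simply what remains after the fitted harmonic part is removed.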

  107. @Paddy

    Since the Mauna Loa trend is pretty much in agreement with the global trend, I doubt they are ‘cooking the books’ with VOG.

  108. the explanation is a good one.

    if you discover an error you MUST find a solution for the WHOLE dataset.

    doing anything else would NOT work.

    that it has a “dampening” effect on the wiggles sounds pretty logical to me. looks like many of those outliers were actually produced by missing data.

    lesson to learn: don’t get too excited over a single data point. even when it fits your beliefs…

    Reply:Hey! I agree with sod. Time to buy a lottery ticket~charles the moderator

  109. Like Dee, I also don’t smell a rat. I know that, to those who suspect all adjustments, it seems odd that the big negative number caught Dr. Tans’ eye. But bear in mind, this data point was a major outlier relative to the linear fit I show above. Any experimentalist would notice the big excursion and investigate what might have happened.

    The answer is straightforward enough.

    That other changes were made suggests that this measurement program is a bit informal. But that happens sometimes. The degree of formality associated with measurements varies from program to program, and will often depend on the use of the data. As interested as climatologists and bloggers are in this data, they are not direct inputs to weather prediction, health or safety programs or anything else.

    Remember: No one was physically injured as a result of the August CO2 blog kerfuffle.

  110. It doesn’t seem like Dr. Tans did anything malicious or with nefarious intent. His explanation makes sense.

    (But I’ll bet $1 he wouldn’t have gone looking for a new algorithm if they had instead lost data from the 2nd half of July and the chart was spiking erroneously high. Any takers?)

    “My objection goes to bias your honor.”

  111. Ed Ried,

    I think you understand the point I was making. Furthermore, duplicate samples tested at an independent laboratory would certainly lend credibility to their data set and also avoid lengthy periods with no data recorded. With regard to the subsequent adjustments to the graph, I couldn’t agree with you more. As for why it took twenty days to get their instruments fixed, well, maybe the pack mule they use to carry the equipment up and down Mauna Loa was lame.

  112. Frank L/ Denmark (07:45:57) :

    I have NEVER seen a single big adjustment done by the alarmists that does not end up being a support for their opinions. NEVER.

    And who believes that is just a coincidence? If the data were really in need of adjustment, you would not be able to tell the positive count of adjustments from the negative count. Nothing in nature is this one-sided, and certainly not data taken by any monitoring system. The odds are overwhelming that this is just fudging the facts.

    So where is the raw data, without the adjustments? We are big boys; we can understand the need to try to make the picture clearer. We just demand to know the facts about what is going on. Honest science should be open science.

    Has anyone heard of the Data Quality Act of 2001? It requires the federal government, its agencies, and its contractors to put all of their science data in the public domain, unmolested… with no corrections and fudging, I might add. Corrections can be applied, but they must be documented. I wonder whether this qualifies under the DQA.

  113. I would agree that the change is for the better during any kind of month that shows rapid variability. I don’t see anything statistically wrong with doing that. It clears the noise so that trends can be seen.

    However, sometimes change in variability is an important statistic. Months that fly all over the graph may show trends in raw data variability that may be much more sensitive to climate change than steady as she goes months.

    I would sure lurve to have the raw data for every moment of measure.

  114. Both the explanation and the correction seem reasonable to me, except for one thing: that weird seesaw where every second month is corrected up and every second down. To me this means either that there is something wrong with the correction algorithm (easy to do when in a hurry), or that for some reason data are predominantly missing at the beginning of one month and at the end of the next. Do they by any chance take their equipment off-line for maintenance at two-month intervals?

  115. Dee:
    Thanks for the legwork and the level-headed interpretation. If you can ask a follow-up question of Dr. Tans, perhaps you can ask him what prompted the move from 4 to 6 harmonics; that would at least help us understand what issues they are trying to handle.
    Thanks

  116. Pingback: Why Climate Change is Unbearably Naked « The Unbearable Nakedness of CLIMATE CHANGE

  117. Given the speed and complexity of the adjustments made to so much data, the idea of a log of the changes made, and how and why, seems at the very least reasonable (a prerequisite, more like).

    The importance of the Mauna Loa data to everyone makes the fact that this is not already the case seem almost unbelievably “sloppy”….

    As far as I’m aware, the FULL Mauna Loa data is not available. Maybe, just maybe, this episode, if it gets the attention it so rightly deserves, might make a full release of this data set a more realistic proposition.

    Nirvana, you might say. Or should that be “contaminated with Vostok”…

    Ahh well, I can dream.

    Whatever happens in the longer term (if the present trends continue), further changes are a racing certainty, so I’d suggest this WILL run and run…..

    “Hockey sticks”..

  118. @Lucia

    Commotions such as this are often positive. I know the drop spurred me to investigate the rate of change in CO2 levels, trying to check the validity of the July 2008 value. In doing so, I stumbled across signals in the change in the rate of change (the acceleration) of CO2 levels that seem to occur when the PDO is about to shift.

    In my experience, that is how science really works. Someone discovers something and a bunch of people investigate it. During the investigation, new stuff is discovered. Repeat as needed.

    @radar

    I would take your bet. I think we are attributing Hansen-like behavior to a scientist who has given us no cause to do so. It is as if there is a need to shoot the messenger because we don’t like the message.

    CO2 is going up. The Mauna Loa data is agnostic to the source of the CO2. Suppose CO2 does start decreasing and some shadowy figures diddle with the data to show a steady increase. That would be WONDERFUL! Think about it for a moment. Putting aside seasonal or other changing vegetative sinks, temperature leads CO2. So, temperature goes down before a CO2 decrease. Now if they fudge the CO2 data to show it high, but there is clear cooling, what does that do to AGWers’ notion that CO2 is driving Temperature?

    If there is any fudging done, I would watch for changes that show a drop in CO2 preceding a drop in global temperature.

    Or, fudge GISS to mask any cooling trend while CO2 is still rising.

  119. Minor nit but a pet peeve of mine … “dampening” … it should be “damping”. To dampen is to make something wet. To damp something (like a chimney damper) is to smooth out change.

    “Damping:
    Damping is dissipation of energy in an oscillating system. Limits maximum amplitude at isolator natural frequency.”

  120. I can see how one would lurve a date
    perhaps some hot tomato.
    But Pam has thrown me for a spin
    she says she lurves raw data.

  121. RE: “…this measurement program is a bit informal. But that happens sometimes.”

    A rather interesting discussion of climate science went on at CA a couple of months back. A charming, unflappable climate scientist argued that the whole field was much maligned by the martinets demanding rigid adherence to protocols and results. He said something similar, that the science was simply more “informal” than that. What he said resonated, but when I see something like this, with alterations being made ad hoc, it sure gives the impression of a bunch of people, shall we say, hopping around, looking for their lost shakers of salt?

    With respect, Lucia, the argument is a seduction. There are too many policy makers now depending on such data, ready to throw money at something that may or may not be a problem.

  122. I have been waiting patiently for this issue to resolve, and now much of it seems to be on the table.

    At the moment what surprises me is that:

    1) The methodology for processing the raw data from Mauna Loa does not seem to be well known/published. I had always assumed that the output WAS raw data.

    2) The post-processing algorithm to ‘insert missing data’ does not seem to be well known/published either. I would have naively guessed that such missing data must be a common issue in long-term measurements of all kinds, and there would be a standard statistical technique available for dealing with it.

    Having said that, the modifications do seem to bring the data closer to a straight line, so the aim does seem to be smoothing of some kind, rather than the introduction of bias. Whether or not the alteration of the data is valid, only a statistics specialist will be able to pronounce on.

    This seems to be yet another justification for AW to go back to the base data and check that…
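    On the “standard statistical technique” point: one common, generic approach (not necessarily what NOAA does) is to fit a smooth trend-plus-seasonal model to the valid observations and evaluate that fit at the missing times. A sketch, with invented data and function names:

```python
import numpy as np

def fill_gaps(t, y):
    """Fill NaN gaps in y by fitting a quadratic trend plus one annual
    sine/cosine pair to the valid points, then evaluating the fit at
    the missing times. t is in decimal years."""
    tc = t - t.mean()  # center time for numerical conditioning
    X = np.column_stack([np.ones_like(t), tc, tc ** 2,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    valid = ~np.isnan(y)
    coef, *_ = np.linalg.lstsq(X[valid], y[valid], rcond=None)
    filled = y.copy()
    filled[~valid] = X[~valid] @ coef
    return filled

# Demonstration on a synthetic daily series with a 3-week gap knocked out
t = 2008 + np.arange(365) / 365.0
truth = 385 + 2.0 * (t - t.mean()) + 3.0 * np.sin(2 * np.pi * t)
y = truth.copy()
y[200:221] = np.nan
filled = fill_gaps(t, y)
```

    The danger, of course, is exactly the one discussed in this thread: the filled values inherit whatever the model assumes, so the method and its assumptions need to be published alongside the data.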

  123. “…harmonics (sine, cosine with frequencies 1/year through 4/year) to describe the average seasonal cycle…”

    Is this cycle used to decide what data is out of range and should not be accepted, thus different measurements over 30 years are now being accepted or rejected? Has anyone found other patterns or influences in the raw data?

  124. Isn’t it more likely that the correction is to compensate for the changing wind patterns rather than a calibration of the equipment?

  125. Based on the new seasonally adjusted Mauna Loa data for the first 7 months, the yearly growth rate will fall dramatically this year, to about 1 ppm/year.
    This correlates rather well with the idea that temperature controls the rate at which CO2 is added to the atmosphere.
    Based on the last 30 years of satellite temperature anomaly data, I estimate this year’s growth rate of CO2 will be 1.2 +/- 0.4 ppm/year if the temperature stays around a 0 C anomaly for the rest of the year.

    There is a pretty good chance that this year will have one of the 5-10 lowest growth rates of the last 30 years. At the same time, we burn coal and oil at a faster rate each year.

    I am convinced that if the global temperature dropped another 0.3-0.5 C, the CO2 level would stop rising.

  126. BillP–
    I think the point you are trying to make is that this particular data-collection assignment shouldn’t be informal. Given its importance and wide circulation, that may well be true.

    I’m just observing that everything we are reading suggests some informality exists. That’s not the same as saying the informality is ideal under the circumstances.

  127. @Jerker

    A 1.0 ppm increase would also be the smallest January-to-January annual increase since 1997 (a 0.98 ppm increase). However, it would still be within the normal variation of the annual change. Compare 1989/1990, 1996/1997, 1999/2000, 2004/2005 & 2006/2007 with 2008/prediction.

    Year CO2 Delta
    1989 352.72 2.49
    1990 353.63 0.91
    1991 354.87 1.24
    1992 356.08 1.21
    1993 356.76 0.68
    1994 358.05 1.29
    1995 359.73 1.68
    1996 361.83 2.1
    1997 362.81 0.98
    1998 365.00 2.19
    1999 367.97 2.97
    2000 369.07 1.1
    2001 370.47 1.4
    2002 372.38 1.91
    2003 374.92 2.54
    2004 377.03 2.11
    2005 378.43 1.4
    2006 381.36 2.93
    2007 382.91 1.55
    2008 385.37 2.46

    Also see: http://www.woodfortrees.org/plot/esrl-co2/from:1960./every:12/derivative

    Swings of this size are not uncommon and appear to have a coherence to ENSO SST data.
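    For anyone wanting to check the Delta column, it is just the first difference of the annual values. For example, using the last few rows of the table above:

```python
# Year-over-year CO2 change, reproducing the last few Delta entries
# from the table above (values in ppm)
jan_mean = {2004: 377.03, 2005: 378.43, 2006: 381.36, 2007: 382.91, 2008: 385.37}
delta = {yr: round(jan_mean[yr] - jan_mean[yr - 1], 2)
         for yr in sorted(jan_mean) if yr - 1 in jan_mean}
print(delta)  # {2005: 1.4, 2006: 2.93, 2007: 1.55, 2008: 2.46}
```

    These match the Delta column, which is why a ~1 ppm year would be unusual but not unprecedented.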

  128. IceFree: Based on the origin of the phrases, I think you have to jump on the bandwagon first.

  129. You don’t need conspiracies or malicious intent to explain adjustments favouring one hypothesis. The phenomenon is called confirmation bias and is well known in the social sciences.

    Also, scientists often claim to be engaged in “value-free” dispassionate enquiry, giving their prognostications special status as knowledge. Yet you can see Dr Tans’ values sticking out like a, er, sore thumb.

    The only solution to value-laden research and confirmation bias is to have competing hypotheses and falsifiable empirical tests. This is my big concern with climate science – the growing number of post-hoc explanations as to why the predictions were wrong. Just what is the predictive ability of the AGW hypothesis? What scientific puzzles has it solved?

    If your theory is immune to data, it is not science.

  130. I agree with Frank L. It’s like a “mistake” in a restaurant bill — 90% of the time the mistake favors the restaurant.

  131. So we hear that all this data going way back has been changed because missing days were being adjusted for.

    OK.

    And, erm…
    1) It just happens that July 2008, for some reason, has a far bigger change than all the other months?

    2) So just when we see this huge decrease in the CO2 rise, they had been working on a new adjustment procedure that covered all years back to 1974?

    It must have been a surprise for them that, just when they had put the last hand to a procedure concerning missing days, along came, by chance, a month with more missing days than ever seen. That’s quite a coincidence.

    3) And so the story is also that well-funded Mauna Loa, whose only challenge is to run a CO2 measurement device, sat in July 2008 watching their device not working, without fixing it, for 20 days?
    Or is it realistic that they could not get a new one or fix the problem in 20 days?

    ANYWAY: If CO2 levels are slowing down/decreasing and we are getting a colder world, well, Mauna Loa has now used up the story of missing days, and they can’t adjust the whole tendency away in the coming years.

    And YES, it sounds fanatical to even consider that some alarmists are not laying all their cards on the table. But who has created this enormous discredit of alarmist data? They have certainly created this situation of low trust themselves.

  132. I do not see this as a conspiracy, but as a stupid move.
    Until yesterday I thought that a large part of the increased CO2 concentration was anthropogenic.
    Today I have doubts. And the only way to eliminate them would be to know the concentration of oxygen in the atmosphere at Mauna Loa (with the same equipment and two days of work, the problem could be solved). You would then compare the increase in the concentration of CO2 with the change in the concentration of O2.
    C + O2 > CO2 (imagine O2 up and CO2 up)

    Ecotretas – best regards in Portuguese.
    best site = http://www.surfacestations.org

  133. lucia:

    //That other changes were made suggests that this measurement program is a bit informal. But that happens sometimes. The degree of formality associated with measurements varies from program to program, and will often depend on the use of the data.//

    This ends up being one of the central issues in the whole climate change discussion for me. The problem is that the degree of formality around the data is dependent largely on the originally intended use of the data, which in many cases now differs greatly from the actual use of the data.

    The CO2 data, the historical temperature records, and the majority of the current temperature tracking data are being treated with a level of precision that is simply not supported by the actual data collection methods. And the resulting analysis is driving trillion dollar policy decisions that should not be based on data that was intended for far more pedestrian uses.

    I certainly don’t see anything nefarious about this current data point correction, but it does seem illustrative of the larger problem.

  134. The Australian CSIRO runs an atmosphere monitoring station at Cape Grim, Tasmania, in the path of the ‘roaring forties’. I have searched for results, but they seem to release only a ‘yearly’ figure, and the latest I could find was for 2005! If there was access to their daily measurements then we could compare them with the Mauna Loa results. Maybe this shows that the Mauna Loa people are braver than the Cape Grim ones! Of course, the CSIRO is a powerful advocate of AGW – with recent funding cuts they probably see it as an important income earner.

  135. As usual, these corrections appear when the values seem lower than normal, not when they appear to be higher than normal. Higher than normal is “evidence for AGW”, not an indication that a correction might be needed.

  136. I have a few questions that I’m afraid will not be answered.
    How many times in the past have these adjustment algorithms been run?
    How long has Dr. Pieter Tans been responsible for adjustments?
    Did his predecessor instruct him in the necessity of these wholesale adjustments?
    Are there any records of previous adjustments?
    Did Dr. Keeling initiate a protocol that required regular adjustments?
    Would it be worthwhile to use the Wayback Machine to set up more blink comparators?
    This may be slightly paranoid, but it would still be good to know the truth.
    Thanks, all; I have enjoyed every comment.
    Mike Bryant

  137. Mr IceFree (15:19:29) refers to a New Zealand website. The article carried there is one of many pieces of fine analysis by Prof John Brignell which are collected on his own website. It’s one of my faves and well worth a read:
    http://www.numberwatch.co.uk

    On the topic of this thread, I must again expose my ignorance and naivety for your delectation. It comes as a great surprise to me that two sets of results are not habitually released, one being the raw data and the other being the data as adjusted/smoothed. I don’t doubt Dr Tans’ sincerity in the adjustments he makes, but we all learn by exposing our analyses to scrutiny and inviting others to comment. Others might well learn from what we have done and we might well learn from suggestions made. What I do not understand is why there is no standard practice of releasing raw data as well as adjusted data in all fields where public policy might be affected by the results (subject, where appropriate, to issues of intellectual property rights and commercial sensitivity).

  138. Found an interesting letter at iceagenow

    Dear all,
    FRAUDULENT SCIENCE

    My advice to climate alarmists is that now is an appropriate time to start planning your exit strategy. The whole IPCC/UNFCCC edifice is about to disintegrate.

    I described these events in my recent memos. My position during all these years has been very simple. I could find no evidence of unnatural changes in the officially published hydrometeorological records.

    Not only is there no believable evidence in the data to support climate alarmism, but the evidence refutes the IPCC’s claims and completely undermines its position.

    What is the future of climate alarmism and its associated research? There is none. The globe is cooling, the glaciers are advancing and Bangladesh is not being inundated by rising sea levels. Public interest is falling and the media are becoming more critical. The possibility of nations reaching agreement on meaningful actions to control, let alone reduce, their undesirable emissions is receding by the day. The basic science underlying the IPCC’s position is being eroded away, stone by stone.

    If the alarmists try to follow the adaptation route, they will be squashed underfoot by civil engineers and applied hydrologists.

    There is only one remaining option. Abandon ship.

    Kind regards,
    Will

    Will Alexander is Professor Emeritus in the Department of Civil and Biosystems Engineering, University of Pretoria, South Africa; Honorary Fellow, South African Institution of Civil Engineering; and was a member of the UN’s Scientific and Technical Committee on Natural Disasters from 1994 to 2000.

  139. @Frank L

    Suppose the equipment failed on July 20th and the failure caused the loss of the month’s data to that date; the issue then becomes one of sloppy data safeguarding.

    What people don’t seem to be getting is that the original July 2008 data was incorrect because of this loss of data. The mean of the last 10 days of July cannot be representative of the mid-month mean which is the standard for all the other months. This is why it stood out like a sore thumb. It was wrong and had to be corrected.

    The only real discussion is whether the method used for the correction represents a valid reconstruction/adjustment of the missing data. The exact method he used is still rather vague, so until there is a clear understanding of the process it is pointless to speculate about bias or motives.

    When the global July CO2 number finally comes out, we will get a better picture of the accuracy of the adjustment to the Mauna Loa July CO2 mean.
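    A toy illustration of why a last-10-days mean misrepresents the month: at Mauna Loa, CO2 falls through July as the Northern Hemisphere growing season draws it down, so averaging only the tail of the month lands below the true monthly mean. The numbers below are entirely made up for illustration:

```python
import numpy as np

# Made-up July: CO2 declining roughly linearly from 387.0 ppm on the 1st
# to 384.5 ppm on the 31st as the growing season draws carbon down.
daily = np.linspace(387.0, 384.5, 31)

full_month = daily.mean()     # true monthly mean, ~385.75 ppm
last_10 = daily[-10:].mean()  # what only the 10 surviving days would give, ~384.88 ppm
bias = full_month - last_10   # ~0.88 ppm low
```

    Any within-month trend, up or down, produces the same kind of bias whenever only part of the month is sampled.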

  140. I was employed by a chemical plant for 20 years. If we had pulled the stuff that I see posted on this site regarding testing, we would have been sent to prison by the EPA. Correct me if I am wrong, but are not world decisions based on this CO2 data? Gee whiz, I have zero faith in the temp data, the chemical analysis, and every other graph that is out there.

  141. (Must have missed closing some links properly. Is there a way to preview before posting?)

    REPLY: Sorry there is no preview… the code was so messed up I gave up trying to edit, try posting again – Anthony

    REPLY 2:You can just put in the links without trying to put in any tags or titles. That is the safest if not coolest thing to do.~charles the moderator.

  142. Now we know why the earth is not warming as much as the models predict. Could it be the effect of water vapor and clouds? Negative feedback? The latest program in the PBS Nova series was just aired; the main subject was “solar dimming.” The link to the transcript: http://www.pbs.org/wgbh/nova/transcripts/3310_sun.html
    But the real subject was AGW, and James Hansen was the major interviewee. (What a surprise!)

  143. Hat tip to Fat Bigot for the link [from the “Laws” page]:

    Langmuir’s Laws of bad science

    1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.

    2. The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the low level of significance of the results.

    3. There are claims of great accuracy.

    4. Fantastic theories contrary to experience are suggested.

    5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.

    6. The ratio of supporters to critics rises to somewhere near 50% and then falls gradually to zero.

    It appears that we are currently transitioning from #4 to #5.

    [more on Langmuir here. More laws here.]

  144. Lucia,

    It would be interesting to know how many readings required adjustment because they didn’t fall within the bounds of what is expected. If these are common, maybe the adjustments aren’t required after all.

  145. Radar:
    I would think an erroneous up-spike would no doubt be pulled into line as readily as a down-spike; otherwise the CO2 measurement might appear sensitive to other local factors such as the volcano. It is also possible that the first 21 days of data were discarded because they did not fit the criteria for what would normally constitute fair data. Is that possible, rather than the equipment being the cause of the missing data?
    Why does the line take such a pristine shape? It looks wrong from the start if the change is indeed supposed to be influenced by human activities. Sorry if these remarks are old hat for some, but I would like to understand this a little better.

    One more question: why is it a good thing if the adjustment renders an even smoother line? It’s hardly a stretch to see the trend before the smoothing. Has this to do with masking the seasonal variation? Surely that is interesting in itself, since it is supposed to originate from the growing vs. dormant season. Why should one assume, as remarked above, that an erroneous spike or trough is simply instrument or measurement error? I would very much like an answer from anyone who has the time. Thanks.

  146. I’m having a problem with those suggesting that Dr. Tans’ forthrightness in providing an explanation should settle the argument in their favor.

    Let’s look at the overall picture:

    1. NASA and its affiliates have consistently presented “challenging” numbers.
    2. Hansen and his GISS refuse to provide any information, including raw data, how and why they manipulate data, or any magical formula used in manufacturing his numbers.
    3. The SST numbers have mysteriously disappeared from the media’s discussion of AGW.
    4. Atmospheric numbers not coming out as expected are ruled invalid or of such little value as to be of no consequence.
    5. Just the other day, one of those affiliates presented the rough draft of an article on predicted climate-related disasters using an obviously doctored photo of a house partially under water.
    6. And now this fellow Dr. Tans is supposed to be our Knight in Shining Armor and provide a valid explanation for the Mauna Loa number manipulations?

    I think not. Given the obvious corruption in NASA, there is NO WAY they would allow one scientist to bring a halt to their lying, deceiving methods and ways. He would be immediately ostracized and banned from all meaningful operations.

    Jack Koenig, Editor
    The Mysterious Climate Project
    http://www.climateclinic.com

  147. Even if Dr. Tans’ adjustment is reasonable and defensible on its face, it is still a bit troubling. Would a similar adjustment have been made if the CO2 numbers were higher than expected? Somehow I doubt it. And if not, it has the potential to introduce a bias into the numbers.

    One of the cardinal rules of statistical analysis is that you choose your criteria BEFORE you see the data. Otherwise it’s very easy to fool yourself (and others) into thinking your results are significant.

    By analogy, Dr. Tans should have carefully chosen an averaging method IN ADVANCE and then stuck with it.

    In climate science, this principle seems to have been thrown out the window.

  148. Thanks all. Hope this one works better…

    A few months ago, I started plotting the change in both CO2 and RSS/UAH lower troposphere anomalies for the same months in adjacent years (sorry if this is old hat, but I haven’t seen this particular comparison done before). CO2 is constantly increasing, but by different amounts from one year to the next for the same month.

    There seemed at first to be something there, so I tried a simple 13-month rolling average of these changes to smooth it out.

    It’s not a strong predictor of amplitude nor duration of the cycles, but one thing does seem to jump out. The trend line of temperature consistently switches direction anywhere from 6 to 18 months PRIOR to a corresponding switch in the direction of CO2 change (i.e. temperature leads CO2).

    Based on this, I wouldn’t expect the amount of CO2 change in future months to start increasing consistently until some time after temperatures do the same. I was expecting July’s increase over last year’s July value to be small, but the August 3rd reporting was quite a surprise. The next day’s reporting was more in line with what I had expected (1.25 ppm), which is still the smallest increase across the same month in adjacent years since 2001.

    I’m anxious to see the July values for RSS and UAH.
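    The comparison described above is easy to reproduce on any monthly series: difference each month against the same month a year earlier, then smooth with a 13-month centered average. A sketch on a synthetic series (my own construction, not the commenter’s actual spreadsheet):

```python
import numpy as np

def yoy_change(monthly):
    """Change between the same month in adjacent years."""
    m = np.asarray(monthly, dtype=float)
    return m[12:] - m[:-12]

def smooth13(x):
    """13-month centered rolling average (shrinks the series by 12)."""
    return np.convolve(x, np.ones(13) / 13.0, mode='valid')

# Synthetic monthly series: 2 ppm/yr trend plus a 3 ppm seasonal cycle
t = np.arange(240) / 12.0
series = 360 + 2.0 * t + 3.0 * np.sin(2 * np.pi * t)

smoothed = smooth13(yoy_change(series))
# The 12-month difference cancels the purely annual cycle here, so the
# smoothed year-over-year change sits at the underlying 2 ppm/yr trend.
```

    Note that the 12-month difference by itself removes any purely annual cycle, which is why this kind of plot isolates the year-to-year growth rate.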

  149. No one has taken a shot at these questions yet.
    How many times in the past have these adjustment algorithms been run?
    How long has Dr. Pieter Tans been responsible for adjustments?
    Did his predecessor instruct him in the necessity of these wholesale adjustments?
    Are there any records of previous adjustments?
    Did Dr. Keeling initiate a protocol that required regular adjustments?
    Would it be worthwhile to use the Wayback Machine to set up more blink comparators?
    This may be slightly paranoid, but it would still be good to know the truth.
    Thanks, all; I have enjoyed every comment.
    Mike Bryant

  150. WOW… Dee, don’t be suckered by a nice scientist.

    Scientists should publish the raw DATA. We don’t need the good Dr. to make an interpretive dance of the DATA. I had assumed we were looking at RAW data, but now we know we are not. Please ask him to publish the raw data with the missing days. In my industry, and most industries, we do an 8D report when a mistake happens – no one is allowed to simply make a change in the dead of night and expect everyone else to accept it.

    8D Process
    1. Someone OTHER than the person primarily responsible should form the correction team. (NOT DONE)
    2. The problem must be completely described – Why was the equipment down? How long had it been broken? Why did it break? Has this breakage occurred before? Has it contaminated other data? What is the backup system? (NOT DONE)
    3. How is the problem going to be contained? – Publish the options for eliminating the problem BEFORE implementing the change. (NOT DONE)
    4. Identify the root cause – Ask why the problem happened, then why that reason happened, again and again, until there is no further reason; the last answer is probably the root cause. (NOT DONE)
    5. Ensure that your solution is standardized so that the problem will not occur again. (NOT DONE)
    6. Implement the change (DONE) and verify the change has not created other problems (DONE, in the sense that the problem was that AGW needed to be supported)
    7. Implement preventive measures (NOT DONE)
    8. Credit the correction team. (NOT DONE)

    So the event was political, not science. Don’t be taken in, Dee.

    REPLY: I’m working on a post mortem, please hold for that. -Anthony

  151. Dee,

    Just before the correction was made, I suspected that the Mauna Loa data was too clean.

    Could you ask the good Dr.

    “What is the fixed standard in CO2 measurement?
    What is the material of the CO2 sensor?
    How often is the CO2 sensor calibrated against the standard?
    What is the drift coefficient of the CO2 sensor?
    What is the linearity of the CO2 sensor with electronics?
    What is the drift of the linearity of the CO2 sensor with electronics – is this calibrated as well?
    How old is the electronics used to measure the CO2 sensor?
    How often is the electronics calibrated against known electronic standards?
    How much random noise is in their measurement?
    How specific is the sensor to only CO2?
    Does the sensor react with any other gas?
    Is the sensor calibrated over temperature?
    Is the data adjusted for temperature?
    What is the sensor accuracy vs. temperature?
    How long does the sensor temperature soak before measurement?
    Is the sensor calibrated over atmospheric pressure?
    What is the sensor accuracy vs. atmospheric pressure?”

    And what specifically broke on the instrument, why it was down for 20 days, etc.

  152. McGrats (19:05:26) agreed. The credibility factor has been driven well below 1.0 by the shenanigans of Hansen et al. Un-announced, un-discussed adjustment of graphs is not the way to get it back. It’s like they’ve made their peace with “shoddy”.

  153. Time for a Press Release

    “Dr. Pieter Tans of NOAA caught altering Mauna Loa CO2 data by AGW skeptics”

    Altering data with no peer review, no public comment, no before-and-after, no root cause identification, and no plan to prevent the problem from recurring – changing it when the data runs against your bias – is very, very wrong, and Dee and Anthony are wrong to give this guy a pass. In life, if a guy like Dr. Tans is comfortable enough to make a change like this within 24 hours, any detective will tell you it was not his first time.

    [snip – potentially libelous label]

  154. Well, this has triggered a lot of comment!

    I have followed the Mauna Loa data for a long while now and had a lot of discussion about their reliability with Richard and Ernst (Beck). Some background may be of interest here:

    Air intake at Mauna Loa is continuous, with 4 samplings and 2 calibrations per hour. The average of the 4 samplings is noted as hourly average and these are available as raw data at:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
    up to 2006.

    If there is a large variation between the 4 samples, the one-hour average is flagged (NOT discarded or adjusted). If there are large variations between hourly averages, these are flagged too. The same goes for abnormally high readings, due to volcanic outgassing, and, more frequently, abnormally low readings due to upslope conditions (which bring CO2-depleted air from vegetation in the valleys up the mountain). Except for the large variations within an hour, the other flagged data are not used for daily and monthly averages. Daily averages need several hours of continuous reliable measurements, and monthly values need at least several valid days.
    The explanation of the flags used are here:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/README_insitu_co2.html
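The sampling-and-flagging scheme described above can be sketched in Python. The spread threshold and the minimum-hours rule here are illustrative assumptions, not NOAA's actual values:

```python
# Hedged sketch of the flag-and-average scheme described in the comment.
# max_spread and min_hours are invented for illustration.
import statistics

def hourly_average(samples, max_spread=0.5):
    """Average the 4 samples taken in an hour; flag (not discard) the
    hour if the spread between samples is large."""
    avg = statistics.mean(samples)
    flagged = (max(samples) - min(samples)) > max_spread
    return avg, flagged

def daily_average(hourly, min_hours=6):
    """Daily mean from unflagged hourly averages; None if too few
    reliable hours remain to make a valid daily value."""
    good = [value for value, flagged in hourly if not flagged]
    return statistics.mean(good) if len(good) >= min_hours else None
```

Flagged hours stay in the raw file but are excluded from the daily mean, matching the "flagged, NOT discarded or adjusted" point above.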

    Daily, monthly and yearly averages are always smoothed, with clear outliers omitted. Whether you include or exclude the outliers does not change the overall year-by-year trend by more than a tenth of a ppmv; only the variability around the trend is reduced.

    Thus in my opinion, Pieter Tans was right to adjust the July data and the rest of the data back to the origin. These are no adjustments of the raw data, this only smoothes the presentation of the curve and doesn’t change the trend, which is now about +60 ppmv over the past 50 years. If you compare the Mauna Loa data with the SH stations like the South Pole, you see exactly the same trend, but with some delay and far less seasonal variability (2 ppmv vs. 6-10 ppmv for MLO).

    That the positive trend of CO2 is human-made is quite clear; only the year-by-year variation around the trend is influenced mainly by temperature variations, and the seasonal variation by temperature and, mainly, vegetation. I have made a web page which explains that (including comments on Mauna Loa) in full:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html

    The year-by-year trend of CO2 can be formulated as follows:
    dCO2 = 0.55 x CO2emissions + 3 x dT

    The short-term influence of (ocean) temperatures on CO2 levels is about 3 ppmv/°C, the long-term influence (over an ice age – interglacial transition) about 10 ppmv/°C and about 55% of the emissions (as mass) stay in the atmosphere.
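Taken at face value, the commenter's empirical relation can be written as a one-liner. This is a sketch of the formula as quoted, not an official NOAA model; units are assumed to be ppmv for emissions and °C for the temperature change:

```python
def co2_growth(emissions_ppmv, dtemp_c):
    """Year-by-year CO2 change per the formula quoted above:
    dCO2 = 0.55 * CO2emissions + 3 * dT (ppmv)."""
    return 0.55 * emissions_ppmv + 3.0 * dtemp_c
```

So, for example, a year with emissions equivalent to 4 ppmv and a 0.1 °C warming would give a growth of about 2.5 ppmv under this relation.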

    Pieter Tans extended the above formula further by including precipitation, which influences vegetation growth and thus CO2 levels…

    There was only one major adjustment of the raw data, when it was discovered that the calibration gas used at that time (CO2 in nitrogen) gave a bias in the NIR measurements compared to CO2 in air. All calibration gases used at the 500+ CO2 monitoring sites are themselves calibrated against a central standard.

    Thus all together: there is no conspiracy. There may be some bias (but I am pretty sure that if there had been a similar sudden increase of CO2, Pieter Tans would have adjusted it in the same way), and the raw data are available (with some delay…), so one can calculate the trends with any smoothing algorithm one likes, as the smoothing is completely unimportant for the general trend.

  155. “The reason was simply that we had a problem with the equipment for the
    first half of July, with the result that the earlier monthly average
    consisted of only the last 10 days. Since CO2 always goes down fast
    during July the monthly average came out low. I have now changed the
    program to take this effect into account, and adjusting back to the
    middle of the month using the multi-year average seasonal cycle. This
    change also affected the entire record because there are missing days
    here and there. The other adjustments were minor, typically less than
    0.1 ppm. Too bad for the self-described “skeptics”.”

    Am I wrong, or does this sound like a self-fulfilling model instead of a representation of a data set? Have missing days? Just smooth them all out, since CO2 is “well mixed”. Have data that doesn’t match the algorithm patterned to match expected seasonal swings? Just adjust the data.
    Missing days… why not just take a measurement a few times a year and plot the increase as a straight line that matches the expected rate of increase? An algorithm could be made to model that. Then, if the data from the few measurements don’t match the expected rate of increase, just fudge that data.
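For what it's worth, the correction Tans describes in the quoted explanation (shifting a partial-month mean back to mid-month using the multi-year average seasonal cycle) can be sketched as follows. The day numbering and the example cycle are invented for illustration:

```python
def adjust_to_midmonth(partial_mean, mean_day_covered, seasonal_anomaly,
                       month_len=31):
    """Shift the mean of the surviving days from their average day of
    coverage to the month's midpoint, using a climatological seasonal
    cycle given as a function: day-of-month -> anomaly in ppm."""
    mid = (month_len + 1) / 2.0
    return partial_mean + (seasonal_anomaly(mid)
                           - seasonal_anomaly(mean_day_covered))
```

With only the last 10 days of a 31-day month (days 22-31, average day 26.5) and a steady decline of 0.1 ppm/day, the partial mean gets shifted up by about 1.05 ppm, which is the direction of the July revision shown at the top of the post.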

  156. niteowl, interesting graphs. If you use a wider line type, separate the graphs vertically and put it in PowerPoint, you may be able to sell it to Algore.

  157. Personally, I would just like to know how many days the data has been thrown out based on the parameters or malfunctioning equipment. After a certain point you are not logging enough data to be meaningful.

    As far as adjusting the current July using Deltas from previous years, it SOUNDS just ducky. There is a slight issue though. The previous years did NOT have a sun as quiet, ocean temps and air temps flat for as long or dropping… In other words, it may be the best they can do for their data analysis, BUT, it guarantees the data will look more like the past rather than whatever the heck is happening now!!!!

    What is the point of monitoring something if you end up trying to make the data fit into your preconceptions??? Their quality control sounds more like they are guaranteeing their expectations!!

  158. Ferdinand Engelbeen,

    Is the raw data that is rejected based on the QC, as opposed to obvious equipment malfunction, available??

  159. Ferdinand,

    they need 6 hours of data within their guidelines before it is acceptable. We now know they need more than 10 days of data to compute a month without infilling AND ARE INFILLING 20 DAYS OF DATA!!!!!! THIS IS JUNK!!!!!

    Apparently you are wrong about the data not being changed as it HAS. See the previous posts by others where the data was downloaded.

    Another technical issue is that the diurnal variation in CO2 is fairly large also. If the data they get are not from the same time of day due to QC rejections, they are massaging the daily data to get normalised readings also. If they are actually rejecting the data because they cannot get them during the right time period, I can see a high rejection rate!!

    Now, be serious. Why was this change necessary? This process has been ongoing for over 50 YEARS!!!!!!! They just realised that they did not have the correct procedure to deal with this situation?? This was the FIRST TIME IN OVER 50 YEARS THIS HAPPENED?!?!?!?!?!?

    Are you seriously telling us that this has never happened before and they had to scramble to fix it?? For at least a WEEK no one paid any attention to the measurements and didn’t notice a thing until the incorrect results were posted?? How did they know they were incorrect?? When did someone actually look at them?? They rejected 2/3 of the data for the month and they didn’t realize they had an issue until someone e-mailed them about the large drop?!?!?!?!

    At the least you have a group that is underperforming on a highly paid Hawaiian vacation and should be fired for their negligence!!! It is more likely that there is intentional fraud. I simply can’t believe they are THAT INCOMPETENT!!!

    If I sound incoherent I AM!!!

  160. Ferdinand Engelbeen:

    “That the positive trend of CO2 is human made is quite clear”

    That is the part that I believe is pulled out of thin air. Just exactly what makes it clear that the positive trend is human-caused? It seems to me that you jump to the conclusion that since there is a positive trend, it must be human-caused. I disagree. If it were human-caused, I would expect to see the slope of the increase change with the slope of changes in global CO2 emissions, but it doesn’t. The slope seems rather steady since 1960, while human-generated CO2 was estimated to be 700% greater in 2001 than it was in 1950. Annual human emissions in 1970 were about half what they are now. But the rate of increase in atmospheric CO2 has been relatively unchanged. That tells me that the amount of human-generated CO2 must be a negligible fraction of the total CO2 being added to the atmosphere each year. The bulk of it must be coming from other sources. The increase in the amount generated by humans doesn’t even show up in the data.

  161. These are no adjustments of the raw data, this only smoothes the presentation of the curve

    I don’t understand this comment. By this standard, there is never an adjustment to any raw data, no?

    and doesn’t change the trend, which is now about +60 ppmv over the past 50 years. If you compare the Mauna Loa data with the SH stations like the South Pole, you see exactly the same trend, but with some delay and far less seasonal variability (2 ppmv vs. 6-10 ppmv for MLO).

    It seems to me that this is essentially the same argument which has been used to defend the hockey stick. “Ok, so Mann cut a few corners but his results were essentially correct.”

    In this case, it’s very likely that the results are essentially correct but the incident is still troubling.

  162. So if the data from the last 30 years has now been changed, how does that affect the climate models?

    These climate models have “accurately” mapped the correlation between CO2 and temperature. Apparently the “science is settled”.

    Surely now those models are inaccurate, so the settled science of the calculations on CO2’s effect on temperature are wrong and will have to be readjusted despite the science being settled?

  163. Niteowl,

    Amazing graphs! If I’m interpreting them correctly it looks like very clear evidence that CO2 change is an effect of temperature change and not the other way around. Good stuff, and I hope they don’t get overlooked down here at the tail end of the Mauna Loa discussion, they deserve more scrutiny.

  164. niteowl, yr Aug 05th, 19:49

    yeah, I did the same some months ago,
    and agree with your conclusions.

    By the way, there is a way for a 12 months average:

    Tavg = (T[-6]/2 + T[-5] + T[-4] + … + T[-1] + T[0] + T[+1] + … + T[+5] + T[+6]/2) / 12 (the two end months get half weight, so the weights sum to 12)
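That weighting, with half weight on the two end months so the result is properly centered, looks like this in Python (a minimal sketch):

```python
def centered_12mo(series, i):
    """Centered 12-month mean at index i, with months i-6 and i+6
    given half weight; needs indices i-6 .. i+6 to be in range."""
    window = series[i - 6 : i + 7]          # 13 monthly values
    return (window[0] / 2 + sum(window[1:12]) + window[12] / 2) / 12
```

For a constant series it returns the constant, and for a linear trend it returns the value at the center month, which is the point of centering the window.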

  165. @Dee Noris
    you write:
    “Suppose the equipment failed on July 20th and the failure caused the loss of the monthly data to date the issue becomes one of sloppy data safeguarding.”

    Yes, it is a possibility.
    So the failure made the machine delete all the previous July data, and thus we did not have Mauna Loa “doing nothing” for 20 days with no data. It was only a single incident on July 20th that erased 20 days of July data.

    Lucky that the machine did not erase the June data? Or all the other data?
    That the machine should erase exactly the July data seems a little hypothetical to me.

    But anyway, I work at NovoNordic, in medicine. I imagine if we had a machine that could possibly either
    1) break down without anyone being able to fix it in under 20 days,
    or
    2) erase, when breaking down, all data in the present month AND have no backup. Even my personal PC has a backup system…

    In either case, if this were NovoNordic, we would have been closed down by the FDA in no time.

    And yes, I find it strange that such an obvious problem of missing data has never been corrected for before, all the way back to 1974?

    These things are not impossible, and to me it means a lot that Anthony Watts’s impression is that “they’re OK”.

    But of course one cannot neglect that this is one of 1000 corrections made by alarmists that much, much too often end up supporting their views. It’s just not reliable. Every time there’s “something with the equipment”, it ends up with a change that supports the alarmists. It simply doesn’t look good, even though we want to be understanding, sensible and tolerant.

  166. Stef (01:11:25) :

    “So if the data from the last 30 years has now been changed, how does that affect the climate models?”

    I doubt this change will affect the models in any way since the adjustment does not change the growth rate of the MLO CO2.

    It will also only have a minor impact on the yearly CO2 growth rate.

    In my comment near the beginning I said it looked like the values had been smoothed towards a straight line. I decided to check whether that was true and made some calculations on the seasonally adjusted CO2 data to see if it was smoothed or not.

    I did it like this:
    A linear trendline was fitted for each year, Jan – Dec.
    Then I checked whether the new values were adjusted closer to or further away from this trendline compared to the old values.

    When I put all those adjustments in a diagram, it looked like there was an even distribution of adjustments closer to and further away from the trendline.

    For the last year it looks, if you watch the blink GIF in this thread, as if the values have been smoothed, and that is also true. But it is just a short cluster of points over a few years that were adjusted closer to the trendline. Overall I could not see a significant bias toward a smoothing of the values.
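The check described in this comment can be sketched as follows. Plain least squares is an assumption here, since the commenter does not say how the trendline was fitted:

```python
def fit_trend(values):
    """Least-squares line through (0, values[0]) .. (n-1, values[n-1]);
    returns a predictor function of the month index."""
    n = len(values)
    mx = (n - 1) / 2
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(range(n), values))
             / sum((x - mx) ** 2 for x in range(n)))
    return lambda x: my + slope * (x - mx)

def moved_toward_trend(old_val, new_val, month, year_values):
    """True if the revised value lies closer to that year's trendline
    than the old value did (month indexed 0-11)."""
    t = fit_trend(year_values)(month)
    return abs(new_val - t) < abs(old_val - t)
```

Tallying `moved_toward_trend` over every revised month would reproduce the commenter's diagram: a roughly even split suggests no systematic smoothing toward the trend.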

  167. I am still trying to understand why the original data set was published if the problems with the data collection were known.

    Is it common practice to publish information when there have been problems with equipment then simply change it later?

    Why publish findings in this way – which are inconvenient to those who claim we are forcing up CO2 levels – then revise them to a more convenient narrative one day later?

    And some people wonder why I have doubts about what we are told about CO2 and AGW…

  168. I can understand the July adjustment, but the smoothing of the trend line back to 1974?
    I believe the smoothing has nothing to do with the past, but may have an impact on future readings. (As in cooler oceans not having much of an impact on the CO2 uptake…)
    Maybe they have some insight into the satellite CO2 data on “well mixed”.

  169. niteowl,

    Wonderful graphics and nice reasoning.

    What software do you use to generate those graphs?

    Jack

  170. Raw Data:

    I found the raw data up until Dec 2006. Why it stopped in 2006 I don’t know, nor am I inclined to see something nefarious in it at this point.

    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/mlo/
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/README_insitu_co2.html

    @McGrats:

    Dr. Tans is with NOAA, not NASA. I know that both are N-words (but not THE N-WORD) which strongly incite anger and frustration in some segments of the population, but neither organization is a monolithic block.

    I have many friends at NASA who cringe at Hansen’s name and wish he would pack his desk and be gone forthwith. Unfortunately, Hansen built himself a small empire and would be nearly impossible to dislodge at this point. Witness the reaction to the White House’s attempt to restrict his pronouncements to sound science. The Hansen empire will have to fall from within (and fall it will, eventually).

    @Mike Bryant:

    I saw your questions but have not had the time to address them with Dr Tans, if I do at all. If I (or others who may be in contact with Dr. Tans) do get answers to them, rest assured the answers will get posted.

    @John McDonald:

    Having attained the ripe old age of the mid-forties without any major mishap, I seldom get sucked in by anyone unwillingly.

    See my reference to raw data above. I am perplexed by the lack of follow through for the 2007 data and if possible, I will inquire about it. If you read the HTML file, I think that some of your answers will be there. Some of your answers are already known, but out of courtesy, I am waiting on Anthony’s update post (this is his blog after all, no?).

    I am a great advocate of transparency in science and the apparent lack is bothersome to me (which is why I am taking the time to investigate this) but I hesitate to jump to conclusions as to the underpinnings of the translucence we are experiencing.

    @Ferdinand Engelbeen:

    Thank you for filling in some of the missing details on the CO2 collection process at Mauna Loa. Other than the cause of the increase, I believe we are mostly in agreement.

    The total human liberation of C from natural sequestration, minus the amount of C re-sequestered from the atmosphere, does not equal the change in atmospheric C.

    I have a graph someplace that shows this and will try to dig it out later.

    @Glenn:

    The rate of decrease at the end of July is greater than the rate of decrease at the start of July, so using only the last 10 days as the monthly mean would generate an incorrect value, lower than the actual mean would have been. Clearly the July value was incorrect, and good science required an attempt to correct it. The approach taken by Tans is not inappropriate in this situation.

    It is unfortunate that it was the first half of the month that was lost and not the second. Had that occurred, the July data would have been erroneously skewed higher, and we would be cheering NOAA for the correction while the AGWers would be having this discussion!
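A toy calculation shows the bias: with a decline that accelerates toward month's end (the curve here is invented, not real MLO data), the mean of only the last 10 days sits below the true monthly mean:

```python
days = range(1, 32)                           # a 31-day July
co2 = [388.0 - 0.002 * d * d for d in days]   # accelerating decline (ppm)
full_mean = sum(co2) / len(co2)               # mean over all 31 days
last10_mean = sum(co2[-10:]) / 10             # mean over days 22-31 only
# last10_mean < full_mean: averaging only the last 10 days biases low
```

Had the second half of the month been lost instead, `co2[:10]` would bias the mean high, which is the mirror-image scenario described above.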

    @Stef:

    Not by anything worth all this fuss.

    Other than the July correction, the historical adjustments go both ways and the net adjustment (excluding the July correction) is -0.48. So, Dr. Tans’ change LOWERED the historical CO2 Trend.

    @Frank:

    I would imagine that by July 20th, the June data had already been removed and processed. This would explain why only the July data had been lost.

  171. It’s likely a statistical artifact in the calculation method, arising from missing data in calculating the moving average. Anyway, the trend is still the same, and that’s the thing that’s most important when assessing global climate change. Global climate change assessments are based on OVERALL TRENDS in datasets, not absolute change between individual measurements. Datasets are being refined almost all the time; that’s what research is all about: improving, refining, expanding. NOAA know what they’re doing; they’re all highly trained professionals. It was in fact diligent of them to correct the errors in their dataset, rather than just leave the errors in there.

  172. Hello, I’m slightly off topic here but I’m seeking guidance from you good people who’ve been looking at this longer than I.

    I need some help please in understanding more about several areas regarding Anthropogenic CO2 vs global warming… the very basics I’ve got.

    I need to learn a lot more about C12 and C13. This is often cited as a means of detecting the anthropogenic portion of the CO2 part of the greenhouse effect, but I know almost nothing about this and haven’t found anything useful by ‘googling’ it (I’ve no doubt it will be a word soon!). Can you give me some direction please?

    I’m also very interested in anything you can direct me to regarding oceanic outgassing and the longer oceanic cycles.

    Lastly, and closer to this particular thread, I hope you able to help me with this, which I can’t quite get straight.

    I’ve just graphed the Scripps Mauna Loa CO2 and Alaska CO2 together. What I see is that the two go in lock step seasonally, but about a month apart. Loosely speaking, the end of summer gives a CO2 low, and the end of winter gives a CO2 high. That’s not unexpected. The bit I don’t understand is why the CO2 low in Alaska always precedes the CO2 low at Mauna Loa. If the ‘leaf on, leaf off’ argument were true, wouldn’t I see the opposite? The tropics should be sinking the CO2 in the summer, and this deficit should show up in the more polar regions later? The opposite appears to be the case. Mauna Loa always follows Alaska by about a month, and also the annual range for Alaska is much larger than Mauna Loa’s, which I don’t understand. I’m sure I’ve missed something here that you’re all very aware of and hope for a bit of guidance.

  173. REPLY: I’m working on a post mortem, please hold for that. -Anthony

    If Dr. Tan wants to get back in my good graces he could publish the dataset(s) and issue a paper concerning intramonth/seasonal variability.

  174. My word, John! What a journey you have made! (Clearly, being a scientist yourself, your scientific sensibilities have been offended.)

    I don’t know if the infraction here is sufficient to tip the argument, but it is symptomatic of the somewhat loose attitude towards proper procedure.

    Actually I am one of those skeptics who actually believes (with reason) that the 3% of CO2 emitted by man is causing an “overflow” which results in a 0.4% increase in the atmospheric sink. And I know that sometimes data must needs be adjusted.

    But it’s like a patient’s chart–if you alter it you need to say so beforehand, explain what you’re doing, and archive the old records just in case. There’s a procedural ethic involved. If a doctor doesn’t do that, his liability insurance will suffer accordingly. (Plus various other professional ramifications.)

    Climatology does not have the immediacy of the medical profession. Or it didn’t used to, anyway. And I do not regard Dr. Tan as a particularly bad example–at least he answered, which is more than you can say of many.

    But, OTOH, since what is at issue is whether or not to put the planet on a drastic course of socioeconomic chemotherapy (which will directly result in significant morbidity), it would behoove those involved in the diagnosis to begin acting according to a tighter procedure than they have in the past.

  175. KuhnKat,

    The raw hourly averages are available up to 2006 at:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
    where MLO is Mauna Loa, SPO the South Pole, BRW is Barrow (North Alaska) and SMO is Samoa (island in the SH Pacific).
    All raw data, except during instrument malfunction, are available. With instrument malfunction, of course, the data are blank values. Extreme values, or values with known local influence, are flagged and not used for daily and monthly averages. I have looked at the 2004 raw data: over 10% were missing due to instrument malfunction (including several weeks in June), and over 10% were rejected due to upslope winds. Instrument malfunction can require extensive repair and/or replacement of equipment, which may take several weeks for flying in equipment and calibration. Despite the high rate of rejections, there was (for 2004) no difference in trend whether the flagged data were included or excluded…

    The problem they encountered was obvious now and specific to the NH, because July is typically a month with a sharp reduction in CO2. If they had had this in October, it would have been the other way around. Something similar (a 20-day malfunction, 10 days of good data at the end of the month) at the South Pole wouldn’t have had much impact, simply because there is far less seasonal variation in the SH (and it is seasonally opposite to the NH). I suppose that the 2007 raw data will be available soon, but for the 2008 data you will have to wait another year…

    Thus we are talking now about a change in the smoothing algorithm, which does affect the seasonal variation around the trend, but doesn’t influence the trend itself at all. If they had kept the July monthly average as first calculated, it would only represent the last 10 days, which are certainly lower than the “normal” average for July, even for the cold July 2008, and the next month with full data would recover back to (near) normal… I don’t know if this type of problem ever occurred before. Possibly not, or it would have been solved long ago.

    Thus all together, a lot of fuss for nothing. The smoothing is not necessary at all, but gives a nicer graph. The only interest of the real variation around the trend may be for people interested in the seasonal uptake/release of CO2 by vegetation, which has changed in recent decades.

  176. Ferdinand:
    Thank you for the explanation, which confirmed my assumption re data erroneously spiking either way. I only hope that those who decide what is erroneous have not missed something important. Hence the reason why it would be good to see a graph including all the days, even the ones when the volcano does its biggest burps.

    I am still in the dark as to why the trend line shows a clear anthropogenic signal. I do sincerely want to understand this, so I would be grateful for a dummy’s guide as to why the anthropogenic signal is clear. Surely the line, if it is to be taken as an accurate depiction of CO2 levels, shows an increase that could very well be from some larger source, as the rate of rise is so steadily above the seasonal signal. Why can we not see the contribution from the outgassing? Why is that signal removed?

    My own hunch, which I would not expect you to comment on, regarding the change in the graph is that it probably wasn’t the equipment, but simply that the data was not considered fair and that this July was a particularly poor crop. Rather than attract attention, the equipment was blamed.
    Thank you again for the previous explanation.

  177. Crosspatch,

    There are a lot of arguments which point to humans as the source of increasing CO2 in the atmosphere (and the upper ocean layer), see:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
    The main points are:
    – the mass balance: more CO2 is emitted by humans than is accumulating in the atmosphere. That means that natural sinks (over a year) are larger than natural sources. The net amount (as mass, not as individual molecules) accumulated from nature thus is zero.
    – temperature increase/decrease has a limited influence on CO2 levels (max. 10 ppmv/°C), and can’t be the cause of a 60/100 ppmv increase of CO2 in 50/150 years.
    – the d13C level of the atmosphere and the upper oceans declined since the start of the industrial revolution. Fossil fuels and vegetation decay are the only sources of low d13C.
    – oxygen levels decline in line with fossil fuel burning, but with a small deficit. This is caused by increased vegetation growth (which produces oxygen). That means that vegetation growth exceeds vegetation decay (by about 2 GtC/yr), thus vegetation is a source of d13C, not a sink.
    – there is an extremely good correlation between accumulated emissions and increase in the atmosphere for the full period of Mauna Loa data and even a very good one for the previous period (1900-1960), where the accumulation in the atmosphere is about 55% of the emissions. This points to a simple first order process of absorption of extra CO2. The correlation between temperature and CO2 accumulation is short-lived (better for dCO2 over months) and far worse.

  178. The most concerning part about this sordid affair is the lack of a quality process.

    A quality process is NOT a single researcher changing the presentation of data after discovering “broken equipment” one day after the data set is shown not to support AGW. Every major corporation (medical, engineering, industrial) worldwide understands the ISO 9000 process and how to make engineering changes (PCNs). It would be nice to see Mauna Loa at least make a small attempt to follow a quality process. Dr. Tans’ data is being used to support a massive attempt to regulate our lives and potentially cost us all a huge drop in our standard of living – so his data manipulation with no peer review, within 24 hours, no documentation, no impact study, etc. tells me they are NOT a serious science organization. As one poster said, after 50 years of doing it one way, they discovered the need to change within 24 hours, LOL. What was the rush? Ask him that, Dee!

    While some of you may say “it is not a big change, did not change the slope”, this change goes to the credibility of that location, the whole data set. I, for example, believed the Mauna Loa data was raw and was unassailable only 48 hours ago. I never challenged it in any post, any discussion, etc. Now, I wonder.

    Now we know they have problems with their data despite “backup systems”, that the nearby volcano causes issues, and that data outside of certain ranges is flagged and not included. Now we know why their data does not conform to the initial AIRS data. Now we know why the Mauna Loa data looks way too smooth. That’s the inconvenient truth.

  179. Would a smoothing algorithm affect the type of changes that niteowl is tracking in his graphs?
    Mike Bryant

  180. Joy,

    On my explanation page I have the two graphs of raw data for 2004: with and without outliers. Most outliers were to the low side (due to upslope winds, some 4 ppmv below average). That was a year with few days with volcanic outgassing. 1984 e.g. was a year with many days of high outliers due to volcanic outgassing (peaks of +20 ppmv). Both outliers are not used for daily, monthly and yearly averages, but still are available in the raw hourly averages data file.

    I suppose that they simply have an automatic update of the monthly data, with the only rule being that there must be at least 10 days of valid data. What they probably had never encountered is that if these 10 days were all at the end (or the beginning) of a month, this might give a bias in months with a strong seasonal trend…

  181. Perhaps I’m naive, but they could have avoided this whole kerfuffle by simply posting an explanatory statement along with the altered graph. Any looming questions would then have been answered up front.

    Oh well, live and learn….

  182. Dave H,

    On my explanation page I have some remarks about the influence of anthro 13C-depleted fossil fuel use (see the message to crosspatch) as fingerprint in the atmosphere. A nice introduction in the 13C cycle is made by Anton Uriarte Cantolla:
    http://homepage.mac.com/uriarte/carbon13.html

    A lot of ocean CO2 data can be found at:
    http://cdiac.ornl.gov/oceans/home.html
    Their publications are of interest.

    More links (but several need an expensive subscription or payment by article):
    http://www.aoml.noaa.gov/ocd/gcc/co2research/
    http://www.sciencemag.org/cgi/content/full/298/5602/2374
    http://www.atmos.colostate.edu/~nikki/Metzl-Lenton-SOLAS_China07.pdf
    http://ioc.unesco.org/IOCCP/pCO2_workshop/Presentations/metzl-SOCOVV-Final2.pps
    http://www.atmos.ucla.edu/~gruber/publication/pdf_files/gruber_thesea_02.pdf
    http://www.agu.org/pubs/crossref/2002/2001GC000264.shtml

    of particular interest (and free!):
    http://www.pmel.noaa.gov/pubs/outstand/feel2331/abstract.shtml

    There are two reasons Barrow leads Mauna Loa: Barrow samples the atmosphere at ground level (and not so far from tundra, with a short but intensive summer season), where most of the exchange processes take place. That is, exchange with the oceans (more uptake in winter, more release in summer) and with vegetation (the opposite of the oceans). Horizontal mixing is quite rapid (a matter of hours to days), but vertical mixing takes longer (days to weeks). As Mauna Loa is at 3,000 m, this gives a delay and a smoothing of the seasonal amplitude.

  183. John McDonald has nailed it . . .

    “Dr. Tans’ data is being used to support a massive attempt to regulate our lives and potentially cost us all a huge drop in our standard of living – so his data manipulation with no peer review, within 24 hours, no documentation, no impact study, etc. tells me they are NOT a serious science organization”

    Indeed. And how many other changes have been made that we don’t know about ??

    Scientific organization ? I think not.

    Very Mickey Mouse Operation with potentially tragic economic consequences.

  184. As much fun as it is keeping the climate mob in check, one has to wonder if it is something worth fighting for in the long run.

    I would say…let the government have their childish playtime with global warming and everything included (such as CO2 graphs)…and let them be taught a lesson in years to come: when global temperatures cool so on and so forth.

    As for the folks at Real Climate, I tried to post this today. I think it sums up a lot of people’s feelings about their site.

  185. Eh nevermind. Doesn’t wanna work. Not sure why. Oh well. Sorry for the triple post.

    REPLY: Try pasting into notepad to remove formatting, then copy/paste into comment form. – Anthony

  186. Been following your posts. Surprised at the number of individuals you have gathered in such a short time.

    You’ve actually hit upon a subject matter that ties opinion to facts. Not to mention credible sources.

  187. Dee Norris and Ferdinand Engelbeen,

    Thanks for your excellent posts. They go a long way in helping to explain what happened, why, and what the importance or non-importance of this particular event was.

    Kim

  188. @Jack Simmons:

    It’s just the MS Excel Chart function plotting data series from spreadsheet columns. Then do a “Save as Web Page”, and it generates the charts as .GIFs.

    General Question:
    About the issue of constantly rising CO2 concentrations: what were global temperatures doing 800 to 1200 years ago? Wouldn’t the lag in CO2 response to temperature changes observed in ice-core samples still be happening now, or has something happened to change all that? Are we seeing minor fluctuations from current conditions against a stronger response to a background signal from long ago? I sure don’t have an answer.

  189. Ferdinand Engelbeen:
    Thanks for the link about the concentration of oxygen in the atmosphere. I really did not expect that the concentration of O2 was rising.

    many chopps

  190. Ferdinand Engelbeen:
    “the mass balance: more CO2 is emitted by humans than is accumulating in the atmosphere. That means that natural sinks (over a year) are larger than natural sources.”

    I’m a bit confused by this. The consensus, amongst some, seems to be that around half of anthropogenic CO2 is absorbed by natural sinks.
    But what then maintained CO2 levels before mankind started burning fossil fuels? Going by the above, one would expect the downward slope in atmospheric CO2 to have had about the same magnitude (but in the negative direction), and a few hundred years of that would have resulted in little or no atmospheric CO2. This would probably have caused a large-scale die-off of plants, which would in turn have offset the decline in CO2 by a small amount (given that the oceans are the largest CO2 sinks).
    Is there any evidence of such an event?
    And, even if this did happen, how did CO2 levels recover?

  191. niteowl-very interesting graphs…thank you!

    This thread certainly has me looking forward to the data we will be getting in the next few months….

    OK, I’m convinced…Dr. Tans certainly works on top of a volcano and out in the Pacific…anyone else would have noticed that current data releases are instantly subject to review by highly qualified individuals…my heart goes out to him…live and learn….
    Craig

  192. Fernando,

    There is some misunderstanding here: the level of oxygen is decreasing, due to the burning of fossil fuels. The decrease can be calculated, as the amounts and types of fossil fuel burned are more or less known, thanks to the tax/sales inventories of many countries (there may be some underestimation due to illegal sales…). But the decrease of oxygen was slightly less than expected. This points to vegetation growth exceeding vegetation decay over the last decade of the past century. As vegetation growth produces oxygen (and preferentially uses the lighter carbon isotope, 12C), the difference between expected and measured oxygen in the atmosphere reflects what vegetation growth minus decay has produced, and thus is proportional to the amount of CO2 that is incorporated, net, into vegetation. That amount is around 1.5 GtC/year going extra into vegetation.
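    The oxygen-budget bookkeeping above can be sketched in a few lines. All numbers below are illustrative round assumptions (the yearly emissions, the O2-per-carbon combustion ratio, and the “measured” O2 decline are not the published values); the point is only the arithmetic of the argument, not a real inventory.

```python
# Sketch of the oxygen-budget argument: the O2 decline measured in the
# atmosphere is smaller than fossil-fuel burning alone would predict,
# and the shortfall points to net O2 production by growing vegetation.
# All numbers are illustrative assumptions, not published values.

emissions_gtc = 6.5        # fossil carbon burned per year (GtC/yr), assumed
o2_per_c = 1.4             # O2 consumed per unit carbon for the fuel mix, assumed

expected_o2_loss = emissions_gtc * o2_per_c   # if nothing else changed
observed_o2_loss = 7.0                        # "measured" decline, assumed smaller

# Shortfall converted back to carbon = net CO2 taken up by vegetation:
net_vegetation_sink_gtc = (expected_o2_loss - observed_o2_loss) / o2_per_c
print(net_vegetation_sink_gtc)   # ~1.5 GtC/yr, the order of the figure quoted above
```

The same bookkeeping, with the real inventories, is what lets the O2 and 13C budgets split the total sink between oceans and vegetation.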

    The other point is that the incorporation of the lighter carbon isotope in growing vegetation increases the heavier 13C in the atmosphere, thus vegetation decay is not the cause of the declining levels of d13C in the atmosphere and upper oceans of the past 150 years.

    Battle et al. in Science used the O2 and 13C declines independently of each other to calculate how much CO2 was absorbed by the oceans and how much by vegetation for the period 1990-1997. Unfortunately there is no recent update of that article. See:
    http://www.sciencemag.org/cgi/content/abstract/287/5462/2467

  193. “The consensus…seems to be that around half of anthropogenic CO2 is absorbed by natural sinks. But what then maintained CO2 levels before mankind started burning fossil fuels? Going by the above, one would expect that the downward slope in atmospheric CO2 would have been around the same slope (except in a negative direction), and a few hundred years of that would have resulted in little or no atmospheric CO2.”-Peter

    Very insightful…

  194. @ niteowl:

    On longer time scales, it appears that CO2 lags temperature by ~800 years. Did we see warming in 1208 AD? If you believe in the MWP, that sounds reasonable. However, with today’s emissions, it is probably difficult to know how much is from the lag behind temperature. Someone better educated in such things might be able to tell the difference.

  195. Ferdinand,

    I learned a great deal the last time you visited, so I’m hoping you have the answer to a question that Evan Jones and I were discussing off-blog after your last visit. If human activity has played a significant role in the increasing level of atmospheric CO2 in the last 150 years, what explains the dip in CO2 during the mid-1940s, when the manufacturing economies of the Allied and Axis powers were ramped up on a war-time footing, 2 medium-sized Japanese cities were vaporized, much of Germany was incinerated in firestorms, and an incredible amount of ordnance was expended in a relatively brief period of time? Surely, by your logic, CO2 should have seen a spike, not a dip.

  196. @Stan:

    I suspect that CO2 output during WW2 was well below current anthropogenic levels.

    However, aerosol output would have been rather high, which might well have affected global temperatures, but quantifying that exactly would require additional data.

    (Note the weasel words in that last sentence. It means I don’t really know and I need more grant $$$$ to maybe find out!)

    If the 800- to 1000-year CO2 lag is credible, then this is precisely the sort of signal one would expect. Furthermore, surely it’s not just CO2 that would be released from the ocean in a warming world. Do we have such detailed analysis of other gases and their trends over the geological timeframe? If CO2 were seen to be doing something different from all the other gases, then I might be inclined to believe this could be a human signal, but that assumes the figures for the carbon cycle are correct.
    I’m still not convinced by the notion that scientists can calculate how much CO2 comes from volcanism, humans, warming oceans, decay, and fungi altogether, let alone break these figures down with any useful precision. This probably makes me unusual even amongst sceptics. I’m open to being convinced, though. What are the margins of error around the carbon sources and sinks?

  198. John McDonald and Fred,

    One shouldn’t say that the Mauna Loa measurements (and, for that matter, a lot of other CO2 measurements all over the world) don’t have rigorous quality assurance. Most of it is done automatically, like flagging outliers and calculating averages. Later on, more manual control is done and obvious errors are corrected. Discarding outliers from a trend is done frequently, even in process control: if you react to all incoming data from a process, outliers included, you can run into deep trouble…

    Flagging outliers is a part of quality control, as we are interested in more or less global averages and trends, not in local volcanic outgassing, local sugarcane field CO2 use or data from failing equipment. Local CO2 in vegetation (or volcanoes) may be of interest for biologists and scientists who want to study the carbon cycle in more detail, but there are better places to do that: midst the fields and/or forests, where some 400+ stations over the world are at work, or at the mouth of fumaroles…
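    To make that kind of automatic flagging concrete, here is a minimal sketch. This is not the actual NOAA/MLO procedure (its criteria aren’t given here); the window, threshold, and sample values are all invented for illustration. The idea is simply: flag hourly values that sit far from their neighbours, exclude them from the averages, but keep them in the raw file.

```python
# Generic outlier-flagging sketch (NOT the actual NOAA/MLO algorithm):
# flag hourly values that deviate from the mean of their neighbours by
# more than a threshold. Window, threshold, and data are invented.

def flag_outliers(hourly_ppmv, window=6, threshold=2.0):
    """Return (value, is_flagged) pairs; a value is flagged when it differs
    from the mean of up to `window` neighbours on each side by > `threshold` ppmv."""
    n = len(hourly_ppmv)
    result = []
    for i, v in enumerate(hourly_ppmv):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbours = hourly_ppmv[lo:i] + hourly_ppmv[i + 1:hi]
        mean = sum(neighbours) / len(neighbours)
        result.append((v, abs(v - mean) > threshold))
    return result

# Flagged values are excluded from daily/monthly means but would stay
# in the raw hourly file:
hours = [385.4, 385.5, 385.3, 381.0, 385.6, 385.4]  # one upslope-wind dip
kept = [v for v, flagged in flag_outliers(hours) if not flagged]
print(kept)   # the 381.0 reading is dropped from the averages
```

Real station QC also uses wind-direction and stability criteria, but the shape of the procedure is the same: the raw record is never discarded, only excluded from the averaged product.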

    If there is one data series in this world where I am pretty sure that the highest standards of data integrity and quality are maintained, it is the CO2 data from Mauna Loa and the other 9 stations from near the North Pole to the South Pole. You should read about the struggle of Dr. Charles D. Keeling to maintain the quality of, and the continuous measurements at, Mauna Loa and other places against the different administrations. It really is a fascinating story:
    http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf

    About the use of the Mauna Loa data: even if it is quite sure that humans are responsible for the increase in CO2, that doesn’t tell anything about the influence of the extra CO2 on temperature. That is where we differ from the AGW folks…

  199. I suspect that CO2 output during WW2 was well below current anthropogenic levels.

    I wouldn’t disagree, Dee, but clearly WW2 should have resulted in a measurable increase in anthropogenic CO2 compared to the years both immediately before and after. Instead, every ice-core CO2 chart I’ve seen for that period shows a dip around 1943-45. Aerosol output might very well have masked any change in global temperatures, but I don’t see how it would have altered the CO2 level.

  200. Has anyone noticed that the updated trend now shows three months in a row of decline in the trendline? Isn’t that a first? I thought there had never been more than two.

  201. In my humble opinion,
    Mike Bryant’s comments so far are possibly the most concise and important questions raised yet,
    namely:

    1) How many times in the past have these adjustment algorithms been run?
    How long has Dr. Pieter Tans been responsible for adjustments?

    2) Did his predecessor instruct him in the necessity of these wholesale adjustments?
    Are there any records of previous adjustments?

    3) Did Dr. Keeling initiate a protocol that required regular adjustments?

    Aaaarh, Keeling, Mauna Loa and, it has to be said again,
    Vostok; of course there is no mixing of data sets..
    But where is the raw data? Will we ever, ever see it?
    Probably not. For the newbies to this particular “topic”, here’s a link I hope explains and expands on what is really being discussed here.
    http://www.rocketscientistsjournal.com/2006/10/co2_acquittal.html#more

    I apologise, it is a long, long article/paper, but the comments that follow are even better,
    and may well explain/expand on the importance of, and the probable inaccuracies in, the Mauna Loa data set in its raw form, and why…
    There is also another thread at the site with Gavin Schmidt’s responses, and Dr Glassman’s replies…. A great/must read.

    I think it is all too obvious that there almost certainly must already be a history of “adjustments” and mixing of data,
    probably as obvious as a certain other Mann…
    If ever the raw data is released (but my last penny says it will never be willingly released), we’ll see…
    Good science? Not in my books, but hey, I’m only a layman.
    (and a tax payer)………………

  202. @Stan:

    If current anthropogenic CO2 is 3% of the increase, what percent would it have been in 1944/45? I don’t think we would be able to distinguish it from the seasonal noise, warming oceans, etc…

    @Derek:

    The raw data is available from 1974 to 2006 as I posted earlier.

    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/mlo/
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/README_insitu_co2.html

    Please donate your last penny to help fight misleading AGW activism, a frequent cause of chicken-little disorder.

  203. Peter and Mike,

    Over the Mauna Loa (and South Pole and other stations) period of 50 years, we see a rather constant 56% of the emissions accumulating in the atmosphere. The correlation is a near fit (0.99, R^2 0.9988). If we extend that with the period 1900-1960 (based on ice cores and firn CO2-in-air levels), the ratio doesn’t change much (53%) and the correlation is only slightly lower. See:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/emissions.gif

    That means that with zero emissions we would have zero increase in the atmosphere too.
    If we may extrapolate the trend (55% of the emissions) back in time, we end at about 290 ppmv, quite near the 280 ppmv measured in ice cores and firn (Law Dome) for about 150 years ago. The increase of about 1°C since the Little Ice Age may be responsible for the 10 ppmv difference…

    In pre-industrial times there was a dynamic equilibrium between CO2 releases (mainly near the equator, and from rotting vegetation in fall-winter-spring) and CO2 absorption (mainly near the poles, and by growing vegetation in spring-summer-fall). This equilibrium changed only with temperature: warmer means higher CO2 levels, and the opposite for cooling. The long-term ratio is about 8-10 ppmv/°C. If this equilibrium is disturbed by an extra source, it reacts in such a way that the disturbance, to a certain extent, is countered. The type of reaction we see (increased absorption with increasing emissions) is typical for a simple first-order process (like the absorption of CO2 in the oceans probably is).
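    The ~55-56% “airborne fraction” bookkeeping above can be checked with back-of-envelope arithmetic. A minimal sketch, under stated assumptions: the standard conversion 1 ppmv ≈ 2.13 GtC, a cumulative fossil-emissions figure of roughly 265 GtC since 1958 (an assumed round number, not an inventory value), and a Mauna Loa rise from about 315 to 385 ppmv:

```python
# Back-of-envelope airborne-fraction check. The cumulative-emissions
# figure (265 GtC since 1958) is an assumption for illustration; the
# 1 ppmv ~ 2.13 GtC conversion is a standard factor.

GTC_PER_PPMV = 2.13

cumulative_emissions_gtc = 265.0    # assumed cumulative fossil emissions, 1958-2008
co2_rise_ppmv = 385.0 - 315.0       # Mauna Loa, roughly 1958 -> 2008

emissions_as_ppmv = cumulative_emissions_gtc / GTC_PER_PPMV
airborne_fraction = co2_rise_ppmv / emissions_as_ppmv
print(round(airborne_fraction, 2))  # ~0.56: the ratio quoted in the comment
```

With the real yearly inventories one can do this per year and see how tight the correlation with cumulative emissions actually is.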

    Niteowl and Ken,

    The lag of 800 years holds for large temperature events like the glacial to interglacial transition; it is several thousand years for the interglacial to glacial transition. On very short time scales (seasonal, year-by-year), CO2 variability (around the trend) nowadays is about 3 ppmv/°C and lags only one to a few months.

    Some time ago there was a graph on the net (from AGW supporters) which showed the Law Dome CO2 data alongside a temperature series of unknown origin. It showed a lag of about 50 years of CO2 after temperature. The temperature data and that graph have disappeared from the Internet, but the Law Dome data are still available. A lag of 50 years for (relatively) small changes of about 1°C (and 8 ppmv) between the MWP and the LIA seems not unreasonable.

    Stan,

    The Law Dome ice core CO2 data show a small increase (1 ppmv) around 1942, which may hide some larger differences (Law Dome has a 5-year average resolution). But 1 ppmv is within the error margin of the ice core measurements… Other ice cores have less resolution and don’t show such an increase. I have no knowledge of a CO2 dip in atmospheric levels in 1940-1945; there were not many reliable direct measurements at that time, and most (land-based, with local sources) measurements with chemical methods even give a peak of around 100 ppmv in 1942.

  204. “There are a lot of arguments which point to humans as the source of increasing CO2 in the atmosphere (and the upper ocean layer), see …”:

    I think you completely glossed over the point I made.

    If human CO2 input is x per year and atmospheric CO2 climbs at rate y, then if I add 7x CO2 and the rate of increase remains unchanged, I believe it is a safe bet that the human-added CO2 is a negligible contributor overall.

    Imagine I have a swimming pool that is filling at a rate of one inch per hour from a fire hose and a garden hose. Now I add 6 more garden hoses and the water is still rising at one inch per hour. It is safe to conclude that the amount being added by the garden hoses is negligible compared to the fire hose. Now, if when I added the other 6 garden hoses the rate of increase went up measurably, I would conclude that my garden hoses were having a significant impact on the total.

    In the case of CO2, a 7 times increase in human emissions has not changed the rate of atmospheric increase one iota.

    You may go on all you wish about sources of human emissions, and I will agree that human emissions have gone up, but what nobody has shown is that human emissions have a significant impact on the total CO2 increase (how do you tell the difference between CO2 from coal burned in a power plant and coal burned in a natural underground coal fire?) or that the increase in CO2 is the cause of anything detrimental. There is more sound evidence that the CO2 increase is a result of warming than that CO2 is a cause of warming.

  205. According to this article in The Smithsonian, the coal fires in China alone are equal to 1% of all fossil fuel emissions on the planet.

    Another article in New Scientist from 2003 says:

    “Estimates for the carbon dioxide put into the atmosphere from underground fires in China are equivalent to the emissions from all motor vehicles in the US,”

    There are absolutely huge coal fires burning in India, Indonesia, and the US. If putting them out would save the equivalent of all the cars in the entire US, then let’s do it! THAT is a CO2 reduction investment I can get behind. It not only reduces CO2 but conserves energy resources for the future.

  206. Crosspatch,

    I don’t know where you found data showing a constant increase of CO2. The increase is not linear; on average it is 56% of the accumulated emissions, which means that over the years the annual increase of CO2 in the atmosphere grows in step with the emissions, which are increasing year by year too.
    In 1960 (near the start of the Mauna Loa measurements) the emissions were around 2.5 GtC/yr and the increase of CO2 was about 0.8 ppmv that year. In 2003 we had emissions around 7.3 GtC/yr and the increase in CO2 was 2.5 ppmv…

    Of course there are variations around the trend, caused by temperature variations (and precipitation), but in general these level out after a few years, except if there is a permanent warming or a permanent cooling. From the long-term past we know that the increase/decrease of CO2 follows temperature changes at 8-10 ppmv/°C. Current short-term variations (El Niño, Pinatubo) show about 3 ppmv/°C around the trend. Thus if you want to explain the 70 ppmv increase of the past 50 years, you need to warm the oceans by 7 to 23°C…
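    A quick arithmetic check of the 1960 and 2003 figures quoted in this comment shows both the emissions and the annual rise roughly tripled together, which is exactly the point against a "constant rate of increase":

```python
# Quick check of the 1960 vs 2003 figures from the comment above: both
# emissions and the annual CO2 rise roughly tripled, so the rate of
# increase is anything but constant.

emissions_gtc = {1960: 2.5, 2003: 7.3}     # GtC/yr, from the comment
annual_rise_ppmv = {1960: 0.8, 2003: 2.5}  # ppmv/yr, from the comment

emission_growth = emissions_gtc[2003] / emissions_gtc[1960]
rise_growth = annual_rise_ppmv[2003] / annual_rise_ppmv[1960]
print(emission_growth, rise_growth)   # ~2.9x and ~3.1x respectively
```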

    That underground coal fires may be a huge source of CO2 is possible, and they should be stopped (if at all possible). But if that is only 1% of the total fossil fuel use, then the human share of the CO2 accumulation in the atmosphere only drops to 55.5% of the emissions. Not a big deal. Again, whether that has an important influence on temperature is an entirely different question from the origin of the increase in the atmosphere.

  207. But I have no knowledge of a CO2 dip in atmospheric CO2 levels in 1940-1945, there were not much reliable direct measurements at that time, and most (land based, with local sources) measurements with chemical methods give a peak of even around 100 ppmv in 1942.

    Ferdinand,

    Beck’s paper, based on chemical methods, contains the only graph I could find that shows a well-defined dip in the mid-40s. Evan Jones and I did a considerable amount of research on the historical CO2 record earlier this year after reading the Beck paper. I thought I had archived all the graphs I came across showing the mid-40s dip, but all I could find was this, from an email to Evan in March:

    Next I did a search for “CO2 measurement + graphs” including variations like “pre-Mauna Loa”, “historical”, etc.. After looking at a couple dozen graphs that were all “smoothed”, my head began to hurt. OK, a couple Ibuprofen and back to the search. I did end up finding several graphs that merged pre-1958 ice-core data with post-1958 Mauna Loa data, and they do, indeed, show a dip during WWII. I concur with your assessment — Watts up with that? The one thing I did not find is continuous ice-core data up to present to see if it tracks Mauna Loa, because the rate of increase picks up significantly at the point where the two data sets converge.

    Perhaps if Evan is monitoring this thread he has some active links archived.

    I do concur with your earlier statement:

    About the use of the Mauna Loa data: even if it is quite sure that humans are responsible for the increase in CO2, that doesn’t tell anything about the influence of the extra CO2 on temperature. That is where we differ from the AGW folks…

  208. The Mauna Loa CO2 graphs going back to 1960 show a steady increase in CO2 that does not at all match the change in human emissions over that time.

    Here is a link to the graph from NOAA. The rate of increase has been basically stable over that time even though human-created CO2 emissions have increased seven-fold over that timeframe.

  209. In other words, in 2007 we were emitting 7 times what we were emitting in 1960 and the annual increase now is just about the same as it was then. We have added seven garden hoses and the rate of water rise in the pool is unchanged.


  210. “The rate of increase has been basically stable over that time even though human created CO2 emissions have increased seven fold over that timeframe.”

    Yet “global temperature” has fluctuated over periods longer than a year; the 60s-70s stand out. Would this not affect the level of “well mixed” CO2 enough to register on this graph? Or is it just a coincidence that fluctuating CO2 levels, affected by varying temperature, happen to match changes in emissions?

  211. crosspatch,

    a very powerful point you make. Thanks for patiently explaining it. Not everyone has a steel-trap mind.

  212. “If putting them out would save the equivalent of all the cars in the entire US, then lets do it! THAT is a CO2 reduction investment I can get behind.” crosspatch

    Maybe use CO2; that would be fighting CO2 with CO2. It is heavier than air, so it might effectively sequester itself, and it would also prevent future fires unless it leaked away. A lot of weasel words, to be sure.

  213. Crosspatch,

    Even the graph of Mauna Loa trends shows that the rate of increase in the 1960s was smaller than it is now. It is not a straight line! If you go to the data themselves (see http://www.esrl.noaa.gov/gmd/ccgg/trends/index.html#mlo ), you will see that the yearly increase in the 2000s is about threefold the increase in the 1960s. The same holds for the emissions: there is about a threefold increase between the 1960s and the 2000s.

    In fact the increase of CO2 in the atmosphere follows the accumulated emissions in such lockstep (56% of the emissions remain, as mass, in the atmosphere) that it is near impossible this is due to any natural process. A natural process that follows the emissions with such accuracy simply doesn’t exist. And temperature has a much worse correlation with increasing CO2 levels.

    For the graphs see:

  214. Stan,

    I had a lot of discussion with Beck about his historical data. Besides the accuracy, the main problem is that most measurements were taken at places with huge local sources/sinks. And there is not one series with continuous (or repeated) measurements at the same place over a longer period. Thus his “trend” is the result of a mix of places with completely different local circumstances at different times.

    A better indication of pre-Mauna Loa CO2 comes from ice cores and firn. Law Dome has quite high resolution (5-year averages) and, even more interesting, there is an overlap of about 20 years between the ice core gas age and atmospheric CO2 data from the South Pole. This shows that the South Pole CO2 data and the ice core are in line with each other within the accuracy of the ice core measurements (+/- 1.2 ppmv). See:

    Two out of three Law Dome ice cores show a 1 ppmv peak around 1938 (sorry, remembered that it was around 1942). This may give a false impression of a dip around 1945, but it is within the error margin of the measurements…

  215. Ferdinand Engelbeen

    Flagging outliers is a part of quality control, as we are interested in more or less global averages and trends, not in local volcanic outgassing, local sugarcane field CO2 use or data from failing equipment. Local CO2 in vegetation (or volcanoes) may be of interest for biologists and scientists who want to study the carbon cycle in more detail, but there are better places to do that: midst the fields and/or forests, where some 400+ stations over the world are at work, or at the mouth of fumaroles…

    Are there better places to measure atmospheric CO2 than the top of a large volcano? Maybe the top of an “Alp”? Or a “Rocky”? That’s like choosing to measure pollution near a motorway and claiming this measurement is the norm for the rest of the countryside, simply because the wind was blowing in the right direction for a safe measurement. Whilst we all understand the need for meaningful statistics and pristine curves, we should question the validity of the data before it even reaches the QC stage. My guess is that measuring from ML is an exercise in proving that CO2 is well mixed when comparisons are made with other stations in other regions.

    To paraphrase Crosspatch, and quote Piers Corbyn,
    “if you ‘P’ in a lake, does the level go up? The accurate answer is ‘yes’, but can we measure it?”
    Let alone the question of whether it is responsible for modern-day warming.

  216. Two out of three Law Dome ice cores show a 1 ppmv peak around 1938 (sorry, remembered that it was around 1942). This may give a false impression of a dip around 1945, but it is within the error margin of the measurements…

    Thanks, Ferdinand.

  217. Joy,

    There are no ideal places to measure “global” CO2, but there are several places which may be deemed good. The best place in fact is the South Pole, far from vegetation and far above and away from the only volcano in the (wide) neighbourhood. If only it were not so cold there… You may imagine the technical/personnel difficulties one encounters in the Austral winter taking samples there.

    So every place has its own specific problems. Barrow has low readings in summer when the wind is from the land side (tundra); La Jolla too, if the wind is from the land side. Mauna Loa sometimes reads high from fumaroles, but more frequently reads low in the afternoon under upslope conditions. Despite that, of the 8,600 measurements per year (Mauna Loa), some 60% are good enough to give a profile of what happens with CO2 in the atmosphere.

    The 10 base stations in use are all situated in or at the edge of oceans and/or at high altitude, where there is little variation due to vegetation and/or other local sources/sinks. After quality control, the remaining trends are all identical and within 5 ppmv of each other. The difference is mainly from near-surface to altitude and between the NH and SH. Maybe the highest parts of the Alps (Jungfraujoch) or one of the Rockies would be better, but the question is whether that is necessary, because the existing stations work well enough.

    Since the Mauna Loa measurements started, CO2 levels have increased by 20%, which is a little more than a “P” in a lake. If all emissions had remained in the atmosphere, the increase would have been 35%. Maybe there are other causes of the increase, but where have the emissions gone, if not partly into the atmosphere and the rest into oceans and vegetation?

    Again, admitting that we are the cause of the increase has nothing to do with admitting that CO2 has a huge influence on temperature; those are totally separate issues.

  218. Engelbeen (09:28:45) :

    “So every place has its own specific problems. Barrow has low readings in summer when the wind is from the land side (tundra); La Jolla too, if the wind is from the land side. Mauna Loa sometimes reads high from fumaroles, but more frequently reads low in the afternoon under upslope conditions. Despite that, of the 8,600 measurements per year (Mauna Loa), some 60% are good enough to give a profile of what happens with CO2 in the atmosphere.”
    It’s that part, “good enough to produce a profile of what happens with CO2…”, with which I have a problem. I still do not understand why 60 percent is good enough, but more importantly, why are the other 40 percent not good enough? Why can we not have nearer 90 or 100 percent? The Alps and Rockies aren’t inaccessible. Since when was a 40% failure rate good enough? How about the middle of the Sahara? More importantly, how does one know a good sample when one sees one? What are the criteria for rejection?

    “The 10 base stations in use all are situated in or at the edge of oceans and/or at high altitude, where there is little variation due to vegetation and/or other local sources/sinks. After quality control, the remaining trends all are identical and all within 5 ppmv of each other.”
    So the samples chosen (by QC) are those within 5 ppmv of each other? And this difference is 3 times the quantity by which CO2 is said to rise on an annual basis? It sounds to me like the difference is far greater than 5 ppmv if one allows all the data to be included. Hence my question as to why 60% is good enough.

    “The difference is mainly from near surface to altitude and between the NH and SH. Maybe the highest parts of the Alps (Jungfraujoch) or one of the Rockies can be better, but the question is if it is necessary, because the existing stations do work good enough.”

    Again, “do work good enough” on whose authority? Who decided that this was good enough?

    “Since the Mauna Loa measurements started, CO2 levels increased with 20%, which is a little more than “P” in a lake.”
    But who determines that the 20% is from humans?

    “If all emissions should have remained in the atmosphere, the increase would have been 35%. Maybe there are other causes of the increase, but where have the emissions gone if not partly into the atmosphere, the rest into oceans and vegetation?”
    Why could not all of the emissions have gone into the oceans and the vegetation?

    “Again, admitting that we are the cause of the…” It should be a matter of proof, not admission. I have never confused the issue of the accuracy of atmospheric CO2 data with human accountability for the twentieth century’s modest warming trend. It is obvious that the latter can be falsified without the data accuracy being questioned. But this is not an excuse to ignore data accuracy.
    I am grateful that you took the time to reply to my questions.

  219. This graphic gives a clearer picture of what I believe is really going on. The change in atmospheric CO2 tracks in near lockstep with the change in sea surface temperatures over that period.

    Over this period we had positive PDO, ENSO, and NAO. As surface water warms, it outgasses excess CO2 into the atmosphere, meaning the resulting increase in CO2 is caused by the ocean warming and not the other way around. The rate of increase in atmospheric CO2 looks *nothing* like the rate of increase in human CO2 emissions, but it does look identical to the rate of increase of global SST.

    To test that idea, let’s look at the next few years as temperatures cool. We currently have a cold PDO, neutral ENSO, and cold NAO. Global SST anomaly should drop, possibly going negative. And as we have seen, the rate of CO2 increase has, in fact, declined even with the adjustment. As we watch it over the next 12 to 24 months, I fully expect it to continue to track the global SST anomaly.

  220. Joy,

    You should definitely read my page about the several reasons why the increase of CO2 in the atmosphere is largely from the emissions (there is a small addition due to the temperature increase of 1900-2000, but that is reversing now):
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html

    The mass balance is the ultimate proof of the human contribution to the increase of CO2 in the atmosphere. As long as more is emitted than the atmosphere gains, there is no net addition from nature. That not all emissions are absorbed by the oceans/vegetation is a matter of process equilibrium: one needs a driving force to push CO2 from the atmosphere into the oceans (and vegetation), which is only possible if the pressure of CO2 in the atmosphere (pCO2atm) is higher than the average pCO2 of the oceans. Thus levels in the atmosphere must go up first. The higher the pressure difference, the more CO2 is absorbed.
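    The mass-balance argument can be sketched numerically. The figures below are illustrative round numbers of my own choosing, not actual carbon inventories:

```python
# Illustrative mass-balance check (round example numbers, not real data).
# If human emissions exceed the observed atmospheric increase, then nature
# as a whole must be a net sink, whatever the individual fluxes are doing.

emissions_gtc = 8.0        # assumed human emissions, GtC/year
atm_increase_gtc = 4.0     # assumed observed atmospheric increase, GtC/year

net_natural_flux = atm_increase_gtc - emissions_gtc  # negative => net sink
print(f"Net natural flux: {net_natural_flux:+.1f} GtC/year")

if net_natural_flux < 0:
    print("Nature is a net sink, so it cannot be the source of the increase.")
```

    The sign of the difference is the whole argument: as long as the atmospheric increase is smaller than the emissions, nature removes more CO2 than it adds.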

    With increasing emissions, the pressure difference will go up, but it will never reach a level where all emissions are absorbed. If emissions stopped increasing and stayed at a certain level, then after some time the pressure difference would rise to a value where as much CO2 is absorbed by the oceans as is emitted by humans, and a new equilibrium is reached. This is typical for a simple first-order (physical absorption) process.
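    That first-order behaviour can be sketched with a toy model. The emission rate and uptake coefficient below are assumptions picked for illustration, not fitted values:

```python
# Minimal first-order uptake sketch (assumed, illustrative parameters):
# the atmospheric excess above equilibrium drives absorption at rate k.
# With constant emissions E, the excess approaches E/k, where uptake = E.

E = 4.0       # constant emissions, GtC/year (assumption)
k = 0.02      # first-order uptake rate, 1/year (assumption)
excess = 0.0  # atmospheric CO2 above the old equilibrium, GtC

for year in range(1000):
    uptake = k * excess
    excess += E - uptake

print(f"Final uptake: {k * excess:.2f} GtC/yr (emissions: {E} GtC/yr)")
# The excess converges toward E/k = 200 GtC, where absorption balances emissions.
```

    The point is qualitative: with constant emissions the system settles at a new, higher equilibrium level rather than rising without limit.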

    Another quite solid indication is the decline of 13C in the atmosphere and upper oceans. Vegetation can’t be the source of extra CO2 (and the cause of the decline), as the oxygen deficit points to more vegetation uptake than decay. The oceans can’t be the source of extra CO2, as that has a zero to positive impact on 13C levels, while we see a decline. Moreover, CO2 levels in the upper oceans increase too and the pH is declining.

    Thus there is only one known source of the increase: human emissions…

    My impression is that several skeptics don’t like the idea that we are responsible for the increase in the atmosphere, just because, if that weren’t true, it would falsify the whole AGW theory…

    The different stations all have QC for their own data, which rejects data from daily, monthly and yearly averages based on objective criteria. Some are general: large spikes to either side (values outside 3 sigma of the previous series of values) and instrument malfunction (calibration with a standard mixture outside the tolerance). Others are site specific: land-side wind (Barrow, La Jolla), upslope wind (Mauna Loa), etc… Daily and monthly averages in the NH may differ 10-15 ppmv from SH averages due to seasonal influences; yearly averages are all within 5 ppmv of each other.
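    The 3-sigma spike rejection described above can be sketched as a simple filter. The window length and the hourly values are illustrative assumptions, not MLO’s actual procedure or data:

```python
# Sketch of objective spike rejection: flag any value more than 3 sigma
# away from the recent run of accepted values. Window size and the sample
# data are illustrative assumptions, not the observatory's actual QC code.
import statistics

def reject_spikes(values, window=10, nsigma=3.0):
    accepted, flagged = [], []
    for v in values:
        ref = accepted[-window:]
        if len(ref) >= 3:
            mu = statistics.mean(ref)
            sigma = statistics.stdev(ref)
            if sigma > 0 and abs(v - mu) > nsigma * sigma:
                flagged.append(v)   # spike: excluded from averages
                continue
        accepted.append(v)
    return accepted, flagged

hourly = [385.3, 385.4, 385.2, 385.5, 392.8, 385.4, 385.3]  # one spike
good, bad = reject_spikes(hourly)
print(bad)  # the 392.8 spike is rejected
```

    The criterion is objective in the sense that it depends only on the data themselves, not on what the operator expects the value to be.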

    Differences between subsequent measurements usually are within tenths of a ppmv; there is no QC adjustment or comparison with measurements of other stations, other than that all use calibration mixtures calibrated against the same central standard. Thus, while all stations show the same trends (and nearly the same increase each year), there is a bias in the trends, where the SH stations lag the NH stations. This points to a CO2 source in the NH. The ITCZ forms a barrier which hinders the transport of CO2 between the hemispheres. That is the reason there is a current (and slightly growing) difference of about 5 ppmv between the SH trends and the NH trends.

    Even if one includes all outliers, that doesn’t change the trend by more than a tenth of a ppmv. In fact, it is a luxury problem: there are so many good data that we may choose the best available. One can afford to reject any suspect data (for whatever reason), as that doesn’t affect the only thing we are interested in: the long-term trend of CO2 in the atmosphere.

    For the South Pole, we don’t have that luxury: aside from a brief period of continuous measurement, only biweekly flask samples are taken, which is enough to measure the trend. Even there, one sometimes sees one flask of a triple with an unexplainable spike of +50 ppmv…

  221. Crosspatch,

    Endersbee compares 12-month averages of CO2 with a moving 21-year average of SST. Sorry, but you can enhance any spurious correlation with techniques like that…
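    The smoothing effect is easy to demonstrate with synthetic data: two series that share only a slow trend, plus independent noise, correlate far better after a long moving average. All numbers below are made up for illustration:

```python
# Sketch of how heavy smoothing can manufacture correlation: two series
# sharing only a slow trend, plus independent noise, correlate much better
# after a long moving average. Synthetic, illustrative data.
import random
random.seed(0)

n = 300
trend = [0.01 * i for i in range(n)]
x = [t + random.gauss(0, 1.0) for t in trend]   # "CO2-like" series
y = [t + random.gauss(0, 1.0) for t in trend]   # "SST-like" series

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

def moving_avg(a, w=21):
    return [sum(a[i:i + w]) / w for i in range(len(a) - w + 1)]

print(f"raw corr:      {corr(x, y):.2f}")
print(f"smoothed corr: {corr(moving_avg(x), moving_avg(y)):.2f}")
```

    The moving average suppresses the independent noise while leaving the shared trend intact, so the smoothed correlation rises even though the two series have no year-to-year relationship at all.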

    Further, the graph shows a good correlation because it compares temperature and CO2 over a period, 1985-2008, when both go up. If you include the period 1900-1985 (ocean cooling 1945-1975, CO2 up), the correlation is far worse…
    See: http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg

    One needs to compare yearly averages with yearly averages. The correlation then is far weaker; the causation is partly right, but limited: some 3 ppmv/°C. That is based on the influence of temperature on the variation around the trend during the 1992 cooling (Pinatubo) and the 1998 warming (El Niño).
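    The kind of estimate described here is a least-squares slope of the CO2 variation around its trend against temperature anomaly. The data below are synthetic, generated with an assumed 3 ppmv/°C sensitivity, so the fit simply recovers that assumption:

```python
# Sketch of the sensitivity estimate: regress the year-to-year CO2
# variation (around its trend) on temperature anomaly. The anomalies are
# made up, and the CO2 values are built with an assumed 3 ppmv/°C, so the
# least-squares slope recovers that assumption by construction.
temps = [-0.3, -0.1, 0.0, 0.2, 0.4, -0.2, 0.1, 0.3]  # °C anomalies (made up)
co2_var = [3.0 * t for t in temps]                   # ppmv around the trend

mt = sum(temps) / len(temps)
mc = sum(co2_var) / len(co2_var)
slope = sum((t - mt) * (c - mc) for t, c in zip(temps, co2_var)) \
        / sum((t - mt) ** 2 for t in temps)
print(f"estimated sensitivity: {slope:.1f} ppmv/°C")
```

    With real station data the scatter is large, which is why such an estimate leans on big excursions like Pinatubo and the 1998 El Niño rather than on ordinary years.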

  222. I’ve not read this entire thread, but it occurs to me that taking CO2 measurements on an active volcano that releases CO2 gas from various vents isn’t exactly the best place to take these readings, is it? Sort of like taking temperature measurements next to man-made things like buildings that can influence the results.

  223. Pingback: Mauna Loa CO2 record posts smallest yearly gain in its history - maybe « Watts Up With That?

Comments are closed.