by: Geoffrey H Sherrington
Scientist.
Melbourne, Australia.
20th April 2022.
The short story: Can we detect a change of CO2 in the air after emission reductions following Covid-19 lockdowns?
No, we cannot, because the present measurement of CO2 in the air has errors and uncertainties that are too large to allow detection of the estimated change.
These measurement deficiencies likely arise partly from cherry picking of raw data, a problem that is widespread in climate research, making much of it eminently contestable.
……………………………………
This adds to the WUWT article of a year ago, about the Covid-19 lockdowns, their effects on estimated emissions of CO2 and whether any change is detected in the CO2 measurements at Mauna Loa, Hawaii (MLO).
The Global CO2 lockdown problem – Watts Up With That?
…………………………………………….
THE UNCERTAINTY OF CO2 ANALYSIS.
Uncertainty means different things to people in climate research. It should not, because it is defined at length in publications such as those from the International Bureau of Weights and Measures (BIPM), Paris.
At one extreme, one can use a modern analytical chemistry instrument designed for CO2 analysis, hit the start button 100 times, take a standard deviation and announce a high precision, sometimes confused with high accuracy. This approach tends to reflect little more than the voltage stability of the instrument and does not help to understand climate.
At the other extreme, one can measure the CO2 over a wide range of operating conditions in the raw environment, trying to hold extraneous variables constant, to measure an operational accuracy to put into the larger uncertainty context. Some examples follow.
Many CO2 laboratories now use an IR laser cavity-ring-down spectroscopy device. One maker, Picarro, summarises instrumental performance.
We determined the measurement precision by repeated measurements of gas flowing from the 380-ppm CO2 cylinder at room temperature. A spectral scan was taken every 5 min. The standard deviation is 0.093 ppm CO2. Over an ambient temperature range of 35°C to 20°C, the measured standard deviation degraded to 0.14 ppm CO2 ….
A year ago, New Zealand’s NIWA emailed me about their CO2 measurements at Baring Head.
“The CO2 mole fractions for the eight long-term transfer standard calibration gases are determined by the WMO Central Calibration Laboratory (CCL), with an estimated uncertainty of ±0.07 ppm (1-sigma) with respect to the WMO scale”.
(It is usually found that the instrument performance figure will be smaller than the laboratory operational figure).
Two groups measure CO2 at Mauna Loa: the US government’s National Oceanic and Atmospheric Administration (here NOAA) and the Scripps Institution of Oceanography (here Scripps). The NOAA group continues to claim this:
Global Monitoring Laboratory – Carbon Cycle Greenhouse Gases (noaa.gov)
- The Observatory near the summit of Mauna Loa, at an altitude of 3400 m, is well situated to measure air masses that are representative of very large areas.
- All of the measurements are rigorously and very frequently calibrated.
- Ongoing comparisons of independent measurements at the same site allow an estimate of the accuracy, which is generally better than 0.2 ppm.
(my bold; NOAA Updated December, 2016; March 2018, September 2020, accessed 13th April 2022.)
Both NOAA and Scripps have posted public data for daily, weekly and monthly ppm CO2 mole fraction in dried air. (Some results are from in-situ measurements, others are performed after collection of air in flasks).
Here is a graph showing the analysis difference in ppm between the 2 laboratories on the same day, in situ samples, for year 2020:
[Graph: difference in daily in-situ CO2 (ppm) between Scripps and NOAA at Mauna Loa, 2020]
Note that the Scripps results are, on average, some 0.3 ppm lower than NOAA. “Rejects” are discussed below.
For a longer snapshot, here is a similar graph for the weekly results, for years 2017 to 2021 incl., plus the first 3 months of 2022.
[Graph: difference in weekly CO2 (ppm) between Scripps and NOAA at Mauna Loa, 2017 to early 2022]
An offset of about 0.3 ppm persists, with NOAA higher on average. However, these weekly results can scatter about this mean by up to 1.5 ppm, clearly indicating that one lab or the other (or both) is working outside the NOAA-claimed 0.2 ppm accuracy in this example.
There is a pattern to the differences. NOAA is higher in the early and late parts of the year, with Scripps higher in mid-year. This allows an inference that the difference involves seasons and maybe the way that outliers are treated in each lab. NOAA continues to publish the figure and explanation below about the way they accept or reject results that do not satisfy defined criteria. (Both Scripps and NOAA appear to employ some form of this accept/reject filtering.)
Assuming that normal statistics apply, the weekly figures graphed above would seem to show an overall, useful accuracy more like +/- 0.9 ppm, which is twice the standard deviation of 0.45 ppm for the 275 numbers plotted in the weekly difference graph above. Their mean is -0.29 ppm.
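For readers who want to reproduce this kind of comparison, here is a minimal sketch of the difference statistics described above. It is not the workflow used for the graphs; the file names and column names are assumptions, standing in for the weekly Mauna Loa CO2 files that NOAA and Scripps each publish.

```python
# Sketch only: file names and column names are assumed, not the actual
# NOAA/Scripps file layouts. Both labs publish weekly Mauna Loa CO2 as text files.
import pandas as pd

noaa = pd.read_csv("noaa_mlo_weekly.csv")        # assumed columns: date, co2_ppm
scripps = pd.read_csv("scripps_mlo_weekly.csv")  # assumed columns: date, co2_ppm

merged = noaa.merge(scripps, on="date", suffixes=("_noaa", "_scripps"))
diff = merged["co2_ppm_scripps"] - merged["co2_ppm_noaa"]   # Scripps minus NOAA

print("n =", len(diff))
print(f"mean difference = {diff.mean():.2f} ppm")            # about -0.3 ppm in the text
print(f"2 sigma spread  = {2 * diff.std(ddof=1):.2f} ppm")   # about 0.9 ppm in the text
```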
“REJECTS”.
NOAA describe their selection method for treating measurements they consider to be affected by adverse effects, at this already-quoted link:
Global Monitoring Laboratory – Carbon Cycle Greenhouse Gases (noaa.gov)
[Figure: NOAA’s data selection plot (their Figure 2), with colour-coded hourly CO2 values, from the link above]
The colour-coded dots are defined as follows:
V MEANING: The standard deviation of the 5-minute mole fraction averages should be less than 0.30 ppm within a given hour. A standard deviation larger than 0.30 ppm is indicated by a “V” flag in the hourly data file, and by the red color in Figure 2.
U MEANING: Hours that are likely affected by local photosynthesis (11am to 7pm local time, 21 to 5 UTC) are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2.
D MEANING: Data where this hour-to-hour change exceeds 0.25 ppm is indicated by a ‘D’ flag in the hourly data file, and by the green color in Figure 2.
S MEANING: After the application of the ‘V’, ‘U’ and ‘D’ flags, there can be times when a single hour remains unflagged, but is bracketed by flagged data. This makes it unclear if this single hour could be representative of background air or not. We therefore apply a ‘S’ flag to these single hours. Pink color.
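To make the quoted criteria concrete, here is a minimal sketch of how such hourly accept/reject flags could be applied in code. It follows the thresholds quoted above, but it is only an illustration; the column names are assumptions and this is not NOAA’s actual processing code.

```python
# Illustration of the V, U, D and S criteria quoted above, not NOAA's code.
# Assumed columns: "hour_utc" (0-23), hourly mean "co2_ppm", and the within-hour
# standard deviation of the 5-minute averages in "sd_5min".
import pandas as pd

def flag_hours(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["flag"] = ""

    # V: within-hour standard deviation of the 5-minute averages exceeds 0.30 ppm
    df.loc[df["sd_5min"] > 0.30, "flag"] = "V"

    # U: hours likely affected by local photosynthesis (21 to 5 UTC)
    veg_hours = (df["hour_utc"] >= 21) | (df["hour_utc"] < 5)
    df.loc[veg_hours & (df["flag"] == ""), "flag"] = "U"

    # D: hour-to-hour change in the hourly mean exceeds 0.25 ppm
    jump = df["co2_ppm"].diff().abs() > 0.25
    df.loc[jump & (df["flag"] == ""), "flag"] = "D"

    # S: a single unflagged hour bracketed by flagged hours
    flagged = df["flag"] != ""
    single = (~flagged) & flagged.shift(1, fill_value=False) & flagged.shift(-1, fill_value=False)
    df.loc[single, "flag"] = "S"

    return df
```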
It is plausible to infer that the difference between Scripps and NOAA arises from the subjective choice at each lab on what to accept and reject, but this is a surmise that would require a purpose-designed inter-laboratory comparison to firm up.
More insight can be gained by examination of the change in CO2 from day to day, sometimes called a “first difference” analysis. The next graph shows NOAA and Scripps again, the same data as above, in daily first-difference form.
[Graph: daily first differences of CO2 (ppm), NOAA and Scripps, Mauna Loa]
The distribution of the first difference values is visually different. Scripps seem to have comparatively fewer mid-range values between 1 and 1.5 ppm either side of the zero line. NOAA tends to hug this line, as intuition would suggest it should. Missing data are assigned a value of -2 for graphing purposes here.
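For anyone who wants to repeat the first-difference exercise, a minimal sketch follows. The file and column names are assumptions; missing days are left as NaN here rather than set to -2 as in the graph above.

```python
# Sketch of the day-to-day "first difference" calculation described above.
# "mlo_daily.csv" and its columns are assumed stand-ins for either lab's daily file.
import pandas as pd

daily = pd.read_csv("mlo_daily.csv", parse_dates=["date"]).set_index("date")
daily = daily.asfreq("D")                 # insert NaN rows for any missing days
first_diff = daily["co2_ppm"].diff()      # today's value minus yesterday's, in ppm

print(first_diff.describe())              # spread of the day-to-day changes
```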
Climate researchers in general tend to use more subjectivity than is found in the hard sciences – and it seems to lead to problems.
Further to the CO2 uncertainty just shown, normal laboratory procedure would involve the determination of CO2 in dry air by other analytical chemistry methods. In the final analysis, one could compare results at a given time at a number of locations, by a number of different analytical methods, by different operators and by different instruments. This would give (more or less) the ultimate, practical uncertainty – but there would be justified dissent from those who claim to know why there are differences between sites like Point Barrow Alaska, Alert Canada, LaJolla California, American Samoa, Cape Grim Tasmania and the South Pole – all of which have high quality existing analyses for CO2.
Here, from the Kenskingdom blog, is a time series graph of the difference between Mauna Loa CO2 and the others named. By eyeball, the 2 sigma calculation would be about 5.5 ppm at a given time (and increasing).
https://kenskingdom.wordpress.com/2020/06/
[Graph: “Monthly Difference from Mauna Loa”, CO2 differences for the stations named, from the Kenskingdom blog]
In summary, it is said in some papers referenced below that the lockdowns were expected to show a CO2 decrease of about 0.2 ppm over part or all of the year 2020, compared to 2019 and/or earlier years. That has to be put into context with the various uncertainties of the actual measurements just discussed, with 2 sigma values of 0.14, 0.1, 0.2, 0.9 and 5.5 ppm CO2 in the examples above.
It is simply scientifically incorrect to draw conclusions that require a precision beyond what the measurement process can deliver.
……………………………..
SOME ESTIMATES OF LOCKDOWN REDUCTION OF CO2.
Recently, Dr Roy Spencer has examined CO2 changes at MLO during the early Covid Lockdown.
The model match to observations during the COVID-19 year of 2020 is very close, with only a 0.02 ppm difference between model and observations, compared to the 0.24 ppm estimated reduction in total anthropogenic emissions from 2019 to 2020.
……………………………
NOAA has written this.
https://gml.noaa.gov/ccgg/covid2.html
If emissions are lower by as much as 25%, then we would expect the monthly mean CO2 for March at Mauna Loa to be lowered by about 0.2 ppm, and again in April by another 0.2 ppm, etc. Thus, when we compare the average seasonal cycle of many years we would expect a difference to accumulate during 2020 after a number of months. The International Energy Agency expects global CO2 emissions to drop by 8% this year. Clearly, we cannot see a global effect like that in less than a year.
………………………….
Rob Monroe from Scripps offered this analysis dated 3rd May 2021.
Why COVID Didn’t Have a Bigger Effect on CO2 Emissions | The Keeling Curve (ucsd.edu)
The COVID-19 pandemic caused carbon dioxide (CO2) emissions from fossil fuels to drop in 2020 by seven percent compared to 2019. This decrease in emissions slowed the increase in atmospheric CO2 compared to what would have occurred without the pandemic.
It was too small and too brief, however, to stand out strongly in individual CO2 records, such as the Keeling Curve.
In 2020, CO2 increased by 2.0 parts per million (ppm) at Mauna Loa as concentrations approached 420 ppm. This estimate uses a two-month average centered on Jan. 1, 2021 compared to the similar average one year earlier. The 2020 increase was 22 percent lower than the increase in 2019 of 2.54 ppm, but it was not markedly lower than in other recent years. In 2014, CO2 also increased by 2.0 ppm, and in 2017, CO2 increased by 2.1 ppm. The highest year-over-year growth on record was in 2016, at 3.0 ppm.
……………………………………….
Authors of a November 2020 press release from the World Meteorological Organisation surmised.
Geneva, 23 November 2020 (WMO) – The industrial slowdown due to the COVID-19 pandemic has not curbed record levels of greenhouse gases which are trapping heat in the atmosphere, increasing temperatures and driving more extreme weather, ice melt, sea-level rise and ocean acidification, according to the World Meteorological Organization (WMO).
This WMO conclusion is not justified. It is not known if the Covid reduction could be detected, for reasons given above.
CONCLUSIONS.
It can be seen that the calculated or expected reduction in airborne CO2 from the Covid lockdowns is generally of the order of 0.2 ppm spread over several months in year 2020.
This reduction seems close to – if not smaller than – the uncertainty of the measurements examined here for Scripps and NOAA.
However, there is much uncertainty in the three main parts of this exercise.
1. The reduction in emissions from lockdowns etc. is not well known and is usually expressed with many qualifications in the papers accessed to date.
2. The airborne fraction of CO2 attributed to emissions from mankind remains speculative between wide bounds.
3. The uncertainty of analysis of CO2 in the atmosphere is itself hard to pin down, because uncertainty is poorly defined (and often poorly understood) in climate research papers.
Therefore, there is little probability that the effect of lockdowns after Covid-19 on measurements of airborne CO2 will be accurately detected using current published/publicised methods.
END NOTE.
Climate research has a major credibility problem. It shows whenever claims about climate change are examined in detail. There is seldom a proper estimation of precision, error or uncertainty reported. Where these uncertainties are reported, they are very often shown ‘at their best’, while data and methods that would allow proper reporting are downplayed, excused or simply not mentioned.
One poster child for this problem is the so-called “Hockey Stick” of Mann, Bradley and Hughes, 1st April, 1998.
Global-scale temperature patterns and climate forcing over the past six centuries | Nature
Deep analysis by Stephen McIntyre and Ross McKitrick, 1st November 2003 (and later), revealed scientific deficiencies of types similar to the ones reported here for CO2 data.
https://doi.org/10.1260/095830503322793632
Such deficiencies have consequences. Global energy production is currently in turmoil, partly as a consequence of such deficient science. Also, if reductions in airborne CO2 following Covid-caused emission reductions cannot be detected, how are we going to monitor mandated emission reductions?
Authors have to cease and desist from cherry picking, concealment of adverse data, misrepresentation of uncertainty and reluctance to respond to criticisms of their work.
NOAA could say that they can measure the content of CO2 in a gas mixture in a cylinder to within 0.2 ppm. Real world samples represent a more difficult challenge. Their accuracy on real air is probably closer to +/- 1 ppm and yet they report results to 5 significant figures, which in my opinion is unjustified.
Regarding reproducibility, i.e., agreement between multiple labs, there is a clear bias between NOAA and Scripps measurements. To be fair, it’s not that bad.
Measurements from different sites are also more challenging because while CO2 is approximately well mixed, it is not completely so.
Making a measurement of a small difference on a large signal is inherently statistically difficult.
That’s what ‘anomalies’ are for! 🙂
Clyde,
No! Many fundamental error processes are still there in the numbers, whether they are raw or have a local constant subtracted from them. Granted, some effects can be reduced (such as roughly levelling air temperatures from stations at different altitudes) but by no means all of the errors are eliminated by the anomaly method. Geoff S
Really great post, Geoff.
Understanding uncertainty in data and instrumental resolution are so basic to science, and so willfully ignored in consensus climatology.
Remember that paper about testing moon rock standards in several labs, and how different labs got different results from the same sample?
A discussion of that would make the same point. It would be a great post here on WUWT.
Hi Pat,
yes, I have thought about that moon soils paper by George Morrison. It has the required lessons but it is getting a little old now. Will have another think. Regards. Geoff S
You left off the /sarc tag
Yes, but I did use a smiley, which most people seem to have missed. I’m not surprised that you understood. However, I am surprised that most everyone else didn’t understand.
Clyde,
Sorry, did not see the emoji in time.
Geoff S
The last number of years, starting in January, the Keeling curve goes up by about 4 ppm by May, then down 6 ppm by October, then up 4 ppm by the next January, for about a 2 ppm increase each year. If you attribute the entire 2 ppm to fossil fuel CO2, a drop in emissions of 10% for the economic downturn is 10% of 2 ppm = 0.2 ppm, which is less than the sampling error… so not going to show up with any certainty unless the downturn were to last about 4 years….
Hoping someone can help me understand why an observation/CO2 monitor station is located on an active volcano?
It’s in the middle of the ocean and at high altitude. They are able to take advantage of regular wind patterns to obtain “uncontaminated” samples. They do have to reject some data when the winds do not cooperate.
Guaranteed results
weird that it matches with other stations then
That’s what Pat said – guaranteed results.
I suggest you go to their site and find out what is done.
If you are serious about understanding aspects of the global warming issue, you really need to do this. Also, as noted in this post there are other measurements from diverse geographic sites, and they do communicate.
This particular post has a very specific topic related to the Covid Panic. It is not directly related to the role (or not) of CO2 and Global Warming.
Bob,
Many locations where a monitor station could be put have local or extraneous sources of CO2 wafting around them. The tendency has been to select isolated places like the South Pole, Alert in northern Canada, and Cape Grim on the west coast of Tasmania. Observations at Mauna Loa are described as being affected by extraneous CO2, including from local vegetation changes. CO2 from the nearby volcanoes can be dealt with by adjustment, as those contributions seem quite small, but as I show above, adjustments harm uncertainty. Life is full of compromises. Geoff S
Bollocks … the satellites show CO2 is NOT a well mixed gas … full stop …
“Well mixed” compared to what? The satellites show that the CO2 varies by something on the order of ± 1.3% over the earth, mostly from north (more) to south (less). So it is mixed to within ± a couple of percent.
Water vapor, on the other hand, varies from 38.5 kg/m2 of total precipitable water (TPW) in the tropics, down to 2.6 kg/m2 TPW in the Antarctic, with a mean of 24.3 kg/m2.
This is a variation of +58% to -89% … compared to ± 1.3% for CO2. So compared to the major greenhouse gas, water vapor, CO2 is absolutely well-mixed.
Regards,
w.
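As a quick check, the percentage spreads quoted above follow directly from the TPW figures given in the comment (38.5, 2.6 and 24.3 kg/m2):

```python
# Quick arithmetic check of the spreads quoted above, using the comment's TPW figures.
tpw_max, tpw_min, tpw_mean = 38.5, 2.6, 24.3   # kg/m2

print(f"TPW high: {100 * (tpw_max - tpw_mean) / tpw_mean:+.0f}%")   # about +58%
print(f"TPW low:  {100 * (tpw_min - tpw_mean) / tpw_mean:+.0f}%")   # about -89%
```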
The Averaged Carbon Dioxide Concentration (graphically shown from around the world) is used to show that the CO2 is well mixed (throughout the atmosphere?) because … the average doesn’t vary much? 🙂
Something that isn’t generally appreciated is that all the buildings in Pt. Barrow are heated by natural gas. People commonly used open burn barrels for their trash. There is a small airport in the vicinity which allows the landing of cargo and passenger aircraft. Almost everyone owns at least one snowmobile, and pickup trucks and larger are common. I don’t know about the individual homes, but the army base I stayed at incinerated the feces of workers in what were called ‘toilets,’ resulting in a pervasive odor of burned flesh throughout the base. I can’t help but believe that the background CO2 is elevated compared to the open tundra tens of miles away.
Were the feces not excreted from the humans before burning?
Man, they breed tough soldiers up there.
they burned the waste in Nam the same way. For those who have never smelled the stink they have no idea how bad that it smells.
Yeah I know.
Burning off the shit-pit was what every gormless private (me!) got to do at field base camps.
In Barrow, the toilets had gas-fired blow torches built in. When the lid was closed, the torch fired up automatically. One almost always had the luxury of sitting down on a warm seat, even though it was freezing outside.
Some years ago I read a book about professor Charles Keeling, an oceanographer who reported on carbon dioxide. His first discoveries were in ice cores at the foot of Mount St Helens, a volcano in the north-west USA. He obtained a position with the American Antarctic expedition and found carbon dioxide in ice cores at the foot of Mount Erebus, a volcano in Antarctica. These readings were 3.5 parts per million. He believed the CO2 came from China. He left that service in Antarctica and took some air samples from Cape Grim on north-west Tasmania, not far from the Bass Strait oil fields. I don’t remember if he obtained a CO2 reading at that site.
Correction to above – it was at Mauna Loa in Hawaii that he made his first discoveries. See Charles David Keeling – Wikipedia
https://en.wikipedia.org/wiki/Charles_David_Keeling
Charles David Keeling (April 20, 1928 – June 20, 2005) was an American scientist whose recording of carbon dioxide at the Mauna Loa Observatory confirmed Svante Arrhenius’s proposition (1896) of the possibility of anthropogenic contribution to the greenhouse effect and global warming
In addition to the things Scissor cites. It’s in a politically stable region. It’s a long way from sources of local industrial emissions, there’s a decent road to near the top, and while it gets some snowfall, the road should be usable most of the year. There aren’t a lot of alternatives, and the ones that come to mind seem as bad or worse.
Those are good points.
I would also add that, if you are going to pick anywhere in the world to live, Hawaii looks nice.
Failure to take measurement uncertainty as a real thing is common. The most common example of this is with historical temperature readings, where the thermometer was only accurate to at best one full degree, and could not be reliably read to anything finer than one full degree. Thus two or so significant digits in the data. Using derived data with four places to the right of the decimal is ludicrous.
The CO2 measurements seem to be going down that path.
failure to understand how temperatures are calculated is even more common.
Failure to take systematic measurement error into account is universal.
“Failure to take systematic measurement error into account is universal.”
Not a real problem for evaluations of most of the parameters under current discussion..
W.r.t. temps, even the most pessimistic estimates of “systematic measurement error”, such as those from your 2010 paper, don’t qualitatively change the effects of the trend standard errors calculated from expected values with no consideration of the “systematic measurement error” estimates. That’s why your arm waving in your paper about “Look how big they are!” didn’t come with an actual evaluation of how much they actually widened the range of likely trends.
Standard Error only provides an interval within which a mean may lie. The smaller the value of the Standard Error (which is really the Standard Error of the sample Means), the closer the estimated mean is to the population mean. It has nothing to do with measurement errors, uncertainty, or precision as defined by Significant Digit rules.
You’ve been told this numerous times and still fail to believe or understand.
Lastly, how do you know where a trend lies when any value within the uncertainty interval is just as possible as any other value? Do you think subtracting a number to get an anomaly somehow wipes out the variance, error, or uncertainty in the original measurements?
“ It has nothing to do with measurement errors, uncertainty,…“
Wut? So measurement error of individual data points can not increase the standard error of a trend of them? Uh, not too thoughtful Mr. Gorman.
FYI, the undocumented conclusion to Dr. Frank’s 2010 paper was that those errors not only increased the standard errors of any of the relevant resulting trends, but that they increased them so much that Dr. Frank didn’t need to quantify by how much. He just threw all the cards over and proclaimed that they blew up any attempts at trending.
“Lastly, how do you know where a trend lays when any value within the uncertainty interval is just as possible as any other value.”
Without realizing it, it seems, you are describing equiprobable distributions.
Have you ever calculated the odds that you 2 Gormans and Pat Frank are wrong, and the rest of us aren’t? I agree that it’s possible, but please tell me the computer you used to find out how possible….
by the rest of us, do you mean you ~~and~~ are mosher?
Word salad with no meaning. Measurement error certainly will propagate through any calculation that is done. What you will end up with is an SEM ± the propagated error. That doesn’t even begin to address the uncertainty in the measurements. What you miss is that each station is a sample of the population and SHOULD consist of a distribution similar to the population. That means you should be sampling temperatures from a large number of stations across hemispheres in order to obtain a valid random variable Xi. Yet somehow we continue to see SEMs with 4 decimal digits when averaging anomalies. That tells me that neither the measurement error nor the uncertainty is being propagated properly.
Don’t put words in my mouth. I asked a legitimate question and you decided to deflect to another issue entirely.
Equiprobable distributions require a finite series of equal probability. Measurement uncertainty and even measurement error does not follow this requirement. Quit googling for fancy terms without understanding the assumptions that are necessary for them to fit.
The value a real measurement can take is any of an infinite quantity of numbers within the uncertainty interval and the true value is something you can not know and never will know. With two data points, one could be at the high end of the interval and the next at the low end of its interval or any other combination. Adding more data points only increases the combinations possible. And note, each measurement can have different uncertainty intervals. That is why uncertainty is propagated, not canceled.
Good grief man, I’ve spent my whole life dealing with measurements and how their uncertainty can affect what you are working on. From dealing with rod and main bearing clearances in engines to piston cylinder roundness. I’ve designed and built small signal High Frequency amplifiers that had very specific linearity and intermodulation requirements. Measurements are an everyday task for working engineers. When you are subject to litigation if you screw up, you better cross your t’s and dot your i’s. Tell me a climate scientist that has lost their job because their predictions didn’t come true. Now think about the design engineer and architect that designed that apartment building in Florida that collapsed. Do you think they are going to be out of work?
“Dr. Frank didn’t need to quantify by how much.”
You clearly don’t know the meaning of “Representative Lower Limit.” Bob.
You fled our last conversation. Why’d you come back?
“You fled our last conversation. Why’d you come back?”
Actually, I forgot about “our last conversation”. Yes, I agree that it was thoughtless to leave you waiting, with bated breath. Especially given that your only tiny following is here. I.e., I admit that I only do this for fun, and that I don’t worry that you have any real impact. Maybe I should follow my mom’s advice and quit teasing…
Now, if you actually follow our exchanges, you will see that I walked off the field w.r.t. the fantastic error spread from your 2010 paper. All I asked was that you actually evaluate how that spread increased the standard error of any resulting trend, and changed it qualitatively. Merely arm waving that it is YUGE doesn’t get it. If I missed you doing so in that paper, would you please point that out to me?
You fled the conversation about my model error paper when I crashed and burned your hero argument.
Here, you didn’t even check the link before posting in reply.
But I’m grateful for that. Your reply demonstrates blowhard incompetence — which you’ve memorialized here for as long as WUWT persists.
Concerning my 2010 Uncertainty in the Global Average Surface Air Temperature Index: a representative lower limit, you’ve never failed to get it wrong. There’s little hope that will ever change.
STILL no easily executed evaluation of the uncertainties that you quantify.
Folks, the basic problem is that these elusive “uncertainties” are not only unjustified, but that they hold magic properties that render them impervious to normal statistical treatments. So, the resulting mutual arm waving amongst the tiny coterie here.
Thanks to the Imaginary Guy In The Sky that it goes nowhere else.
Yet another demonstration of your refractory ignorance bob. You don’t understand calibration. You don’t understand systematic error.
And you refuse to understand either.
I’ve explained them for you repeatedly.
Tim and Jim Gorman have endlessly explained both to you. They’ve gone down to kindergarten level descriptions to try and convey the message to you.
And you come back as ignorantly argumentative as ever. Here yet once again.
You claim to be a working geologist. But you behave like a high school japester.
You’ve clearly dedicated yourself to willful public ignorance in maintenance of a destructive pseudoscience.
On your head be culpability for the excess Winter fuel poverty deaths and the families made desperate because employment has been wrecked. You argue for those crimes. Own them.
You just deftly indicated your lack of understanding to everyone who knows some metrology. So there is no misunderstanding I will only show references.
From Dr. J. R. Taylor’s book on uncertainty.
BOB,
It was not the fantastic ‘error’ spread, because Pat showed an uncertainty spread. Not the same animal. Geoff S
How convenient. A magical parameter impervious to any kind of actual statistical evaluation. But if I’m FOS, then feel free to guide me thru that valuation.
It’s simple, bob. Uncertainty is not error.
The point has been explained for you ad nauseam and every which way. T&J Gorman and Carlo Monte have been heroes in that effort.
Evidently their efforts have always ricocheted away.
I don’t know what you think is “magical” about it. I showed definitions from a textbook written by Dr. Taylor and statements from an internationally accepted manual on uncertainty.
I appreciate Dr. Frank mentioning me along with my brother and Carlo Monte. Others have participated in this discussion like Clyde Spencer and Geoff Sherrington.
That is two PhD’s and several folks with engineering backgrounds. Don’t you find it funny that everyone agrees that errors and uncertainty are two different things? Most people would say that “maybe these folks know something I don’t” and try to understand instead of simply calling it magic.
Failure to understand that a global average temperature has little relevance is even more common
I thought temperatures were measured. I guess we need scientists like you to correct those misconceptions.
I caught that too. After you.
Temperatures are not calculated, THEY ARE MEASURED. Uncertainty is ALWAYS part of a measurement. The only calculation that can reduce random errors, but not uncertainty, is averaging multiple measurements of the same thing with the same device.
You betray your background in mathematics where numbers are always 100% accurate with no error and no uncertainty and where statistics never need to have uncertainty included. Scientists should never fall into this trap but obviously do, especially in climate science. Tradesmen like machinists and mechanics or engineers designing things must always include both error and uncertainty in their daily tasks. Scientists should also do this with every experimental measurement.
Not to mention that all measuring instruments are calibrated with standards that have traceable error as well. That’s why standards labs exist. Round robins between lab standards help keep this error low, but there are always outliers in all round robin comparisons.
But in my experience, that never eliminates the misuse of the calibrated instruments in any case, which can add considerable error. And the misuse result is never known and never considered in any published error analysis.
Very well then, I guess I need to toss out my thermometer, seal my office & keep a constant molar volume, back calculate my specific energy (before I throw out my thermometer), and get a very accurate barometer.
Or I could just keep up with the measuring, rather the calculating.
What is your point? Temperatures are usually measured for raw data. Calculations are not the same thing unless you are using something like a calibrated thermocouple and converting a proxy electrical measurement to an equivalent temperature.
Temperatures are “calculated”?
I thought they were recorded.
“temperatures are calculated” … Fraudian slip 🙂
If you want to go further perhaps use your english lit degree.
I look forward to alarmist media releases claiming that one day next summer somewhere will be reported as –
“the hottest day ever ~~recorded~~ calculated”
You could record all the daily average temperatures from any location with 1 degree C accuracy and still end up with a number with 4 or more decimal places as the monthly average.
And it would be as meaningless as calculating it to 100 decimal places. Do you understand the concept of “significant figures” in maths and engineering?
Exactly! What rule do you follow when your calculation is an irrational number? Graphs always crack me up when they are labeled with temperature anomalies of 0.1, 0.2, etc. They should be labeled with the same number of digits as what the data points are, i.e., 0.1000, 0.2000. Guess what? They don’t want to draw attention to numbers like this because many fewer would question the values.
Nick Stokes made the same argument. TheFinalNail’s argument assumes perfect instrumental accuracy and infinite precision.
It’s the argument of people who know nothing of resolution.
Reading TheFinalNail’s comment, I realized that people may not be familiar with the concept of how instrument resolution limits measurements, and how sometimes this is not improved by increasing the sample size.
Here’s an example. I have a mechanical pencil. The thickness of the lead is 0.7 mm. Think about trying to measure the thickness of this pencil lead using an ordinary ruler marked in millimeters. I look at it and I say “about 0.8 mm”. OK, one measurement. Can we improve on that?
Sure we can. I ask you, and you hold the ruler up to the lead and say “0.7″, and someone else says “three-quarters of an mm”, and so on. And for the first few additional measurements, our accuracy does improve. But soon a limit is reached.
We will never, for example, be able to detect the difference between two pencil leads that differ by a few thousandths of a millimeter, by using only an ordinary ruler in the normal way. No matter how many measurements we take. We can ask a million people, and have a statistical error of … just a sec … call the standard deviation (SD) of the readings say 0.1. Standard error of the mean = SD/sqrt(n) … so for a million measurements of the pencil lead, Standard Error of the Mean (SEM) ~ 0.0001 mm.
Doesn’t matter if the SEM is a ten-thousandth of an mm. The limit is not in the sample size. The limit is in the resolution of the instrument itself. We still can’t tell the difference between the two pencil leads using a ruler.
The same is true with the usual weather thermometers, which record to the nearest degree. Yes, averaging will give us better answers, but there is a limit to that process, just as with measuring a pencil lead with a ruler.
My real-world rule of thumb is that we can’t gain any more than one decimal point by repeated measurements. So if the thermometer has gradations every 1°, the average won’t ever be better than a tenth of a degree.
Regards to all,
w.
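The arithmetic behind the numbers in that example is simple enough to show directly; note that the SEM describes only the sampling precision of the mean, not the resolution of the ruler:

```python
# Arithmetic behind the comment above: an assumed reading spread (SD) of 0.1 mm
# across a million readings gives an SEM of about 0.0001 mm, irrespective of
# the ruler's actual 1 mm markings.
import math

sd = 0.1          # mm, assumed standard deviation of individual readings
n = 1_000_000     # number of readings

sem = sd / math.sqrt(n)
print(f"SEM = {sem:.6f} mm")   # 0.000100 mm
```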
And, as one attempts to refine the precision of the measurement, they run into problems with variations in the thickness along the length, and roughness of the surface. So, one calculates a number that doesn’t represent reality. One cannot realistically represent a real-world object with a single number.
Again, those who advocate taking large numbers of readings to improve precision, overlook the essential requirement that to eliminate random errors one has to use the same measuring instrument, on the same object, under conditions of constant temperature and humidity. One can do that with a pencil lead, but not with an ambient outdoors temperature that is always changing.
Not even at the same instrument at different times. Humidity, wind, angle of the sun in different seasons, mowed grass all vary to such an extent that a simple average beyond the recorded value is simply not valid.
Most of what you say here is right on target. However, I disagree with your last statement.
Here is the deal: each measurement conveys a given amount of information. Performing a statistical calculation like an average cannot increase the amount of information that you have available. This is what Significant Digit rules are based upon. Significant Digit rules do allow the addition of 1 more decimal if the number is going to be used for subsequent calculations so that rounding errors do not accumulate. However, the “final answer” must be rounded to the precision of the original measurements.
There are numerous lab references on the internet that describe not using more decimal places than what was measured when averaging measurements.
Jim Gorman April 22, 2022 1:02 pm
Thanks, Jim. Let’s say we’re trying to measure something that is 12.5 mm using a ruler marked in mm. We ask people to measure it to the nearest mm.
Somewhere around half of the people will say “12 mm” and half will say “13 mm” … now, which is more accurate?
• Either 12 mm or 13 mm, which is what you’d get following the rules about significant digits, or
• The average of the measurements, which will be on the order of 12.5 mm
I say the latter, despite it breaking the rules about significant digits … which is the origin of my rule of thumb. I do NOT think we can say “12.57” or the like, but I do think we can gain 1 decimal.
Interesting discussion, thanks.
w.
I use Dr. J. R. Taylor’s book An Introduction to Error Analysis; The Study of Uncertainties in Physical Measurements as my guide. I have attached a page from his book. As you can see, he recommends a ±1 for integer values. By the way, this is similar to what the National Weather Service quotes for uncertainty with LIG thermometers.
Thanks, Jim. I’m well aware since high school of the rules of significant figures.
However, I fear you didn’t answer my question. Which is more accurate?
w.
And I would use 12 ± 1 degree. Why? I also follow the rounding rule of rounding a “5” to the nearest even number. This isn’t usually taught, but it makes sense to me to better balance out what happens when a value ends up exactly in the middle of integer values.
Here a url that explains why.
ChemTeam: Rounding Off
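As it happens, Python’s built-in round() uses the same round-half-to-even convention, which makes a quick demonstration easy:

```python
# Demonstration of round-half-to-even ("banker's rounding"), the convention
# described in the comment above and used by Python's built-in round().
for x in (11.5, 12.5, 13.5, 14.5):
    print(x, "->", round(x))   # 12, 12, 14, 14: exact halves go to the nearest even integer
```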
You still haven’t answered my question.
Also, your answer means the true value could be anywhere from 11 to 13 … but not one observer gave an answer less than 12. So that makes absolutely no sense at all.
Your rounding rules are called “banker’s rounding”. Nothing novel there.
w.
If you truly believe this then read up on Significant Digit rules and explain them away.
I refer you to the following:
From: http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm
You should also read this.
From: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#
He won’t read it and if he does he won’t understand it.
He’s just like the “engineers” who point at calibration labels on their instruments when audited, but never read the calibration reports to understand what they are pointing at.
Jim, I have NO CLUE who you are responding to, or who Doonman thinks you are responding to. Clyde? Me? Pat? Crisp?
This is why I request that people quote what they’re talking about.
Thanks,
w.
To Willis Eschenbach,
(Right up there, just under the poster’s name, and indented a little to the right, there is a standard configuration that says ‘Reply to’. Just after where it says ‘Reply to’ is the name of the recipient of the Reply. In the case of your apparent confusion, it was ‘The Final Nail’.)
“The SEM is correctly used only to indicate the precision of estimated mean of population.”
Exactly. This is what we are discussing. Not the variation of individual data points from their mean, or from their trend.
You continually misread what is meant.
The SEM only tells you “HOW CLOSE” you are to the true mean of the population. As you increase the sample size and/or the number of samples, the smaller the SEM gets, indicating that you are honing in closer and closer to the value of the population mean.
As N (sample size) goes to ∞ the lim 1/N -> 0. What does this mean? It means the sample means distribution is 0 wide, and the mean of the sample mean IS also the population mean.
Remember this is an interval within which the mean will lie. It has nothing to do with the value or precision of the mean! That is determined by the Significant Digit Rules and the resolution of the measurements. For example, it is entirely possible to have a mean of the sample means of 18 ± 0.0001, i.e., the SEM is 0.0001. That does not mean the sample mean is 18.0001. The mean is 18, just plain 18!
The deflection to imbedded error is just that. If you know it, then you correct for it. If you don’t know it, you try and find it. But ITMT you don’t just proclaim, with no factual backup, that it must be YUGE, because [fill in the blank], and that therefore the evaluation is worthless. That’s just a whiny, convenient cop out.
Trending takes this silliness to an even higher level. To have SEM materially influence the trends under discussion, they would have to be:
“The mean is 18, just plain 18!”
No, it’s not. Without consideration of its standard deviation, it would be 18.0000 +/- 0.0001. I am not sure, but with consideration of both the SEM (not correcting for it) and SD, I think that you would calculate (SD^2 + SEM^2)^0.5, find the first non-zero digit on the rhs, and extend the sig figs for the expected value out that far.
Another possibility. Since SEM is defined here as a vertical asymptote, either up or down, as the number of samples goes to infinity, then the calculation might simply be for SD+SEM. Yes, you don’t add standard deviations, but SEM, as defined here, is a single point.
I have not found a reference to how to handle this, but it is certainly a situation easily dealt with using basic statistical rules that I admit to having forgotten.
The SEM is the Standard Deviation of the distribution of sample means. This is where the term Standard Error originates. It becomes applicable in how you define a database of station data. If you declare the data a population, there is no reason to find the SEM. You have the data to find both the population Mean and Standard Deviation.
If you declare the database a group of samples, then you need to investigate the distribution of the random variables to see if you have a normal distribution. The CLT says you will have (should have) a normal distribution if the sample size is large enough and there are sufficient samples. If the distribution is normal then find the standard deviation of the sample distribution. That IS THE SEM. The only way to make the SEM smaller is a larger sample size or to increase the number of samples.
Look at the attached screen shot. Take the SD (i.e., the SEM) of each of the sample groups and multiply it by the square root of the sample size. It will equal the SD(population). You will also notice how the sample distribution gets narrower as you use a larger sample and more samples.
The equation is
SD(pop) = SEM * √N
N = Sample Size
The URL is: https://onlinestatbook.com/stat_sim/sampling_dist/
This URL also has good info: https://www.scribbr.com/statistics/standard-error/
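A short simulation makes the relation concrete; the numbers are arbitrary and only illustrate that the standard deviation of the sample means shrinks with the square root of the sample size:

```python
# Simulation of the SEM relation discussed above: the standard deviation of the
# distribution of sample means equals SD(population) / sqrt(sample size).
# The "population" here is arbitrary illustrative data, not temperature records.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=15.0, scale=5.0, size=1_000_000)

sample_size = 100
sample_means = [rng.choice(population, sample_size).mean() for _ in range(10_000)]

print(f"SD of the sample means (empirical SEM): {np.std(sample_means):.3f}")
print(f"SD(pop) / sqrt(N):                      {population.std() / np.sqrt(sample_size):.3f}")
```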
I meant to address this and forgot.
The SEM is an INTERVAL, it is not an asymptote. It is a Standard Deviation interval. If anything, the asymptote is the SAMPLE MEAN.
The other thing to remember is that N is the sample size, not the number of measurements. You can’t reach ∞; you can only go as large as the population, at which point you are no longer sampling.
[O/T: Request: I seek advice on any links between rates of burning fuels in ICE motor vehicles and [estimated] movements in atmospheric Co2 levels. Purpose: If ICE emissions are claimed to increase [“dangerous/harmful”] atmospheric Co2, then Big Business, Big Government etc pro-Co2 abatement policies would tend to contradict their current post-Covid restrictions policies to get their workers to normatively travel into CBDs, ie to reverse the Corporate “Work from Home” policies. “Thank you” for any leads.]
We’ve all looked them up previously, and it’s a good exercise. Search “Global Carbon Cycle”. Print out a couple of the charts for contemplation. Remember that CO2=44 molecular wt, and C=12 for a ratio of 3.7….cuz charts will be either “CO2” or “Carbon”…there are 3210 gigatons of CO2 in our 400 ppm CO2 atmosphere, so 4 ppm is 32 gigatons, and human emissions happen to be about 32 gigatons of CO2 per year (maybe as high as 37).
So every year humans emit about twice as much CO2 as the Keeling curve increases by (2ppm)….on the other hand, just one of the “natural CO2 sources”, microbial decomposition and respiration in the top foot of soil, account for 240 gigatons of CO2 emissions, 15 times that of humanity.
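The ppm-to-gigatonne conversion used above can be checked roughly from the mass of the atmosphere (about 5.15e18 kg, a standard textbook figure) and the molar masses of air and CO2:

```python
# Rough check of the ppm-to-gigatonne conversion used in the comment above.
M_ATM_KG = 5.15e18   # approximate total mass of the atmosphere, kg
M_AIR = 28.97        # mean molar mass of dry air, g/mol
M_CO2 = 44.01        # molar mass of CO2, g/mol

gt_per_ppm = M_ATM_KG * (M_CO2 / M_AIR) * 1e-6 / 1e12   # Gt CO2 per ppm (mole fraction)
print(f"1 ppm CO2 ~ {gt_per_ppm:.1f} Gt CO2")            # about 7.8 Gt
print(f"400 ppm   ~ {400 * gt_per_ppm:.0f} Gt CO2")      # about 3100 Gt, near the 3210 quoted
print(f"4 ppm     ~ {4 * gt_per_ppm:.0f} Gt CO2")        # about 31 Gt, near the 32 quoted
```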
The possibility of the planet achieving a carbon cycle balance at some higher level than the current 410 ppm due to global greening and increased soil microbial activity, with some constant level of human emission would be considered blasphemy…and the IPCC would set up a burn pit for the warlock that suggested such a thing.
Oh, and a pentagram is the same as a gigaton…or is that a witchcraft thing ?
One way to regain control of the narrative and the science is by replacing the far Left alarmists with normal people. For those of you in Australia, I offer this suggestion.
I manage an ISO 17025 laboratory and I agree with the general point of this article.
But putting all that aside; if CO2 was reduced by 0.2ppm wtf difference would that make*? And in the context of the harm caused by 2 years of lockdowns would it be worth it?
*if you believed CO2 mattered particularly
It should be clear enough that the call for ‘climate lockdowns’ from various quarters is one of two things, depending on who is making the call:
The demand of the (religious/ideological) fanatic who wants to force everyone to accept his beliefs
The demand of those who manipulate the fanatic beliefs for political gains.
Yep, of course it’s political and dogmatic; maybe add that some just like to use the power
But how can anyone get excited about a 0.05% change either way? Most of the claims are something like “doubling of CO2 causes this (piffling) rise in temperature”. So not only is the change in CO2 undetectable, the potential change in temperature would also be minuscule and undetectable.
Andic: I don’t think the issue is 0.2ppm so much as the possibility that the observed changes in CO2 are almost entirely natural. Were it true that human emissions have little or no effect, it’s going to be really hard for “net zero” to save the planet. (Whether the planet is actually in need of salvation is a separate issue).
Exactly. Net Zero is not good policy even if you believe the most alarmist projections of the IPCC.
We cannot make meaningful cuts without a cost that’s never going to be acceptable to Anybody.
And yet, to make Net Zero work, we need to have that cost paid by Everybody.
The only way they can attribute the increase in atmospheric CO2 to people is to pretend that all the terms in the budget are accurate within 1% and never change, and that the residence time of CO2 is more than forty years. All that is risible.
If you change the basic assumptions even a little, it becomes obvious that the increase in atmospheric CO2 is mostly natural. link
If you cherry pick the basic assumptions, you can get any outcome you want.
For example, the linked paper makes the following assumption:
That’s a much more reasonable assumption than assuming that the concentration of atmospheric CO2 has no effect on the uptake by sinks, but that’s what’s necessary if you want to prove that the increase in atmospheric CO2 is due to human causes. The human contribution is, after all, a small proportion of the total flux.
Suppose you have a tank with a capacity of 1 million gallons and there is water flowing into and out of the tank at the rate of 1000 gallons per minute each. The level in the tank will not change. Now, if I add another inlet stream of 1 gallon per minute, the tank will begin to fill. In the very short term, the change in the level will be imperceptible and probably less than the accuracy of the tank gauge, but over time, the change in level will be easily measured. So it is with the effect of fossil CO2 in the atmosphere.
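A small numerical version of that analogy, with the in and out flows allowed to fluctuate (the point raised in the reply below), shows the behaviour being described: the extra 1 gallon per minute is lost in the noise over a day but accumulates into an unmistakable rise over a year. The flow figures are taken from the analogy; the size of the fluctuations is an arbitrary assumption.

```python
# Numerical sketch of the tank analogy above. Flow noise of 50 gal/min is an
# arbitrary assumption; the steady extra 1 gal/min is the analogy's added stream.
import numpy as np

rng = np.random.default_rng(2)
minutes = 60 * 24 * 365                        # one year, minute by minute
inflow = 1000 + rng.normal(0, 50, minutes)     # gal/min, fluctuating
outflow = 1000 + rng.normal(0, 50, minutes)    # gal/min, fluctuating
extra = 1.0                                    # the added 1 gal/min stream

level = np.cumsum(inflow - outflow + extra)    # gallons above the starting level

print(f"After 1 day:  {level[60 * 24 - 1]:>10.0f} gal")  # buried in the fluctuations
print(f"After 1 year: {level[-1]:>10.0f} gal")           # near the 525,600 gal the extra stream adds
```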
You need to remember that the flows in and out are not constant. And they may well fluctuate by significantly more than the extra 1 gallon per minute.
Now how long do you need to wait to make it detectable?
It depends on how large the natural fluctuations in CO2 flux are. As it happens, they are much larger than any short-term changes in the anthropogenic flux. Even so, the persistence of the anthropogenic flux over a long time can have a measurable effect, which is what we see in the Keeling curve, or at least that is what I think.
Your analogy with a water tank is flawed because as the CO2 partial pressure increases, the natural sinks will take up more CO2. Furthermore, the increased CO2 will result in more photosynthetic plankton and vegetation, which will result in increased sequestration of CO2. At the same time, an increase in atmospheric anthropogenic CO2 partial pressure will inhibit outgassing from the oceans, and inhibit soil respiration. That is to say, there are active feedback loops that your simple analogy doesn’t take into consideration.
In the case of your water tank, if the exit hole is at the bottom of the tank, the increase in the head (pressure) with the increased inflow will cause the outflow to increase as a result of increased pressure and velocity. The outflow will not stay constant as you suggest.
Honestly, the analogy wasn’t meant as a model of the atmosphere. If it makes it any clearer to you, envision that the inlet and outlet are hooked up to the suction and discharge of the same pump.
If it makes it any clearer, if the suction and discharge are from the same pump, then the two are equal and it is an unrealistic situation equivalent to no net change.
Aren’t you making a big assumption that the inflow and outflow of CO2 are exactly the same on a short term basis? Do you really think that the Earth balances out CO2 on a daily or even yearly basis? The rate of change in the use of fossil fuels has not exactly followed the rate of change in the Keeling Curve, so why do you think the entire Keeling Curve is anthropogenic CO2? What caused rises and drops in CO2 in the past?
If you are able to follow the logic of my analogy, you’d understand why changes in short term fossil CO2 emissions do not show up in the atmospheric CO2 concentration, but over the long term, they do.
The problem is that your analogy is a poor analogy. Every system is simple if you assume away all the complexity.
The point of the analogy is to show how a short term effect can be difficult to detect while its cumulative effect over time is easily measured. The entire point is to refute the idea that because we can’t measure a change in CO2 as a result of short term changes in fossil CO2 emissions, then the atmospheric build up of CO2 must not be due to fossil CO2 emissions.
My post is an analogy where I get to specify conditions. The argument has been made that since short term fluctuations in fossil CO2 emissions cannot be detected in the atmospheric CO2 concentration that perhaps fossil/anthropogenic CO2 emissions aren’t responsible for the long term build up of CO2 in the atmosphere. I’m not sure if the author, in a round about way, was trying to suggest this or not, but undoubtedly some will draw that conclusion. My analogy is to demonstrate that just because you cannot measure a short term change, it does not follow that a small change in the flow(flux) will not have a measurable impact over time.
That’s a good analogy as far as it goes but it’s not the whole story. As the tank fills, the hydraulic head increases (as the height of the water increases the pressure on the outlet goes up) so the rate that water flows out will also increase. The tank will eventually reach a steady height when the outflow increases to 1001 gpm.
It’s the same with atmospheric CO2. The rate of removal by natural processes increases as the concentration gets higher. As the atmosphere gets further from equilibrium the chemistry for CO2 removal, especially by the ocean, is driven to remove CO2 faster. About 1/2 of all CO2 added by humans is being removed by natural processes now. Some alarmists claim that the removal processes will saturate but there’s no reason to believe that.
It’s much more likely that the rate of increase of CO2 concentration will slow down and it will eventually stop increasing. We’ll almost certainly run out of fossil fuels before it can reach 800 ppm. Some estimates for the peak concentration are quite a bit lower than that.
It’s true that the change in level (concentration) could shift the flow (flux), but that was not the point of the analogy. It’s possible that there could be some naturally occurring shift in the CO2 flux, but what is the explanation for that? You also have to consider the isotopic fingerprint of fossil carbon in the atmospheric CO2 and how it is changing over time. Is there also an alternative explanation for that?
CO2 isotopes in the deep ocean are far from simple. link
There goes your fingerprint calculation.
I notice the study is based on modeling; I guess that nailed it down for you?
Yes. The analyses are essentially cherry picking because they don’t take into account isotopic fractionation as CO2 moves across the water/air boundary. Furthermore, there is reason to believe that the biogenic CO2/CH4 emissions from the tundra are not adequately accounted for. They are going to be enriched in 12C just as fossil fuels are.
You might want to take a look at this:
DOI: 10.1038/s41467-022-29391-5
Does the isotopic fractionation at the air-water boundary cause the amount of C14 in the atmosphere to increase or decrease?
Neither. You are comparing apples and oranges.
“It’s possible that there could be some naturally occurring shift in the CO2 flux, but what is the explanation for that?”
Chemistry 101. It’s what you expect as systems move further from equilibrium – the rate of reactions trying to restore equilibrium gets faster.
If you don’t know that you shouldn’t be commenting.
You’re making a linear assumption. You have ignored the quote I supplied above.
Fair enough, because I think you’ve ignored or failed to comprehend what I said.
Hey Tom.1,
Do we assume that the Q(out) is pressure related and is dependent on the elevation (potable water storage); or do we assume that there is some type of a flume or broad crested weir; or do we assume that there is both pressure flow & weir flow from an open system?
Regardless, your analogy, if a typical potable water storage, is flawed from the start because the velocity in the (approximate, based on your initial conditions) 3″ outfall pipe (2.23 cfs at 25′ head approximately) is already so turbulent that any perturbation at all will blow the resistance out of the water (so to speak). An increase in Q(in) screws things up because the system is already maxed out.
If your reservoir is 4.25′ deep, and your other initial conditions hold, the end static condition is a depth of 4.258 feet of head. No perceptible change in depth … nobody knows or cares.
You need to know what variables are applicable and what variables can be discounted. You don’t.
With the earth as a system, it is obvious that the system capacity is not already maxed out. CO2 levels have been much higher in the past. If you are concerned about the effect of fossil CO2 in the atmosphere you should be looking at an analogy that uses an hourglass as a reservoir, and that the current water level is just above the small area restriction of the glass … the depth doesn’t increase as fast as the volume (still a crappy analogy, but better than yours).
This is not a real world tank. It is obvious that I failed to communicate the purpose of the analogy, but when people are determined not to understand something, there is not much one can do.
I understand what you are saying. It is wrong. The earth is not analogous to the reservoir you want.
Using a reservoir as analogy, depending on the configuration of the reservoir, the fluid elevation will increase a little, a lot, or none at all, based on an increase of input.
And it would take a lot of design jiggering to make the increase linear wrt time.
The description of the calibration procedure refers to a Standard Deviation of 0.093 on a reference gas at 380 ppmv. So that results in a 95% confidence uncertainty of +/- 0.186 (0.2 rounded). But this does not include the uncertainty of the reference gas mixture. That would typically be something like 0.2% of the stated value for a quality primary reference cal gas. That’s about +/- 0.76 ppmv. There are certainly other sources of uncertainty related to the actual measurements, including such things as temperature of the instrument, dryness of the sample, interferences from other gases, etc. One of the things you learn in doing rigorous calibration uncertainty budgets is that measurements are never as accurate as you would think.
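A minimal sketch of the root-sum-square combination being described, treating the two quoted figures as 2-sigma values for illustration:

```python
# Sketch of combining the uncertainty components quoted above in quadrature.
# Treating both as 2-sigma values is an assumption made for illustration.
import math

u_repeatability = 2 * 0.093     # 2 sigma from the 0.093 ppm standard deviation
u_reference_gas = 0.002 * 380   # 0.2% of a 380 ppm reference gas, ~0.76 ppm

u_combined = math.sqrt(u_repeatability**2 + u_reference_gas**2)
print(f"combined uncertainty ~ {u_combined:.2f} ppm")   # about 0.78 ppm, dominated by the reference gas
```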
Ric C,
Agreed. It would be nice if more scientists understood error and uncertainty and presented estimates that were useful rather than idealistic guesses. Geoff S
While much of this seems solid, the comparison between Mauna Loa and the other sites in other parts of the world isn’t correct. It’s not a variation (with a claimed associated standard deviation of 5.5 ppmv) in measurements as he claims. It’s due to the fact that the CO2 varies by latitude. Here’s the carpet diagram of the changes with time and location.
Given that, there is no reason to think we’d get the same numbers at the different sites. So his figure (unnumbered, titled “Monthly Difference from Mauna Loa”) does not mean what he claims it means.
The author makes the following claim:
Without the 5.5 ppmv value, the range of the 2 sigma values goes from 0.1 to 0.9 ppmv.
Next, the author makes a variety of claims about the difference between Scripps and NOAA Mauna Loa data. Unfortunately, following the bad example of climate alarmists, the author has NOT provided a link to the data underlying those claims. I fear it is therefore impossible to replicate his claims, so they are only suitable for the Journal of Irreproducible Results.
Finally, the claim is that there was an annual reduction in CO2, not a weekly reduction … and that wouldn’t necessarily be affected by any of the uncertainties mentioned by the author. Given the daily and weekly data shown in the graphs, the variation is symmetrical. This means it’s subject to the law of large numbers … so on average over months this shouldn’t be a problem.
Best to all,
w.
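[A minimal numerical sketch of the law-of-large-numbers point in the comment above, assuming symmetric zero-mean daily noise; the values are synthetic, not Mauna Loa data.]

```python
# Symmetric zero-mean noise on daily values averages down roughly as
# 1/sqrt(n) in a monthly mean. Purely synthetic numbers for illustration.
import random, statistics

random.seed(0)
TRUE_VALUE = 415.0      # ppm, hypothetical monthly "truth"
DAILY_SIGMA = 0.5       # ppm, hypothetical symmetric daily scatter

monthly_means = []
for _ in range(1000):                        # simulate many 30-day months
    days = [TRUE_VALUE + random.gauss(0.0, DAILY_SIGMA) for _ in range(30)]
    monthly_means.append(statistics.mean(days))

print(f"daily scatter (1-sigma):        {DAILY_SIGMA:.3f} ppm")
print(f"scatter of 30-day monthly mean: {statistics.stdev(monthly_means):.3f} ppm")
# ~0.5 / sqrt(30) ≈ 0.09 ppm -- symmetric noise largely averages out.
```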
Hi Willis,
I guess that I was too brief with the comment I made 2 paras before the graph to which you refer.
“This would give (more or less) the ultimate, practical uncertainty – but there would be justified dissent from those who claim to know why there are differences between sites like Point Barrow Alaska, Alert Canada, LaJolla California, American Samoa, Cape Grim Tasmania and the South Pole – all of which have high quality existing analyses for CO2.”
In essence, I agree with you. I was using the concept of a spectrum of ways to look at uncertainty, from the narrowest to the widest and this was the widest – with qualifications.
Data sources are all easy web searches. They were referenced in the essay I wrote a year ago for WUWT, linked above.
Cheers Geoff S
Willis: A truly gorgeous chart. Thanks for posting it. Given your expertise in data analysis, there’s a chart of daily and hourly CO2 measurements at https://gml.noaa.gov/ccgg/trends/monthly.html that might interest you. I won’t try to link to it here as I generally botch the html and this cheap chromebook isn’t much good for anything other than typing text. My first impression of the NOAA chart is that different airmasses have significantly different CO2 concentrations, 2-3 ppm worth. If so, that has to complicate analysis. And there may be some diurnal effects in the NOAA hourlies. Note that the daily averages for Apr 10 and 11? are missing even though the hourly points seem to be there. Not clear why.
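[A sketch of how one might check that hourly data for a diurnal cycle. The file name and column names are hypothetical placeholders, not the actual NOAA file format.]

```python
# Group hourly CO2 readings by hour of day to look for a diurnal pattern.
# Assumes a local file 'mlo_hourly.csv' with 'datetime' and 'co2' columns
# (hypothetical -- adapt to however the NOAA files are actually laid out).
import pandas as pd

df = pd.read_csv("mlo_hourly.csv", parse_dates=["datetime"])
df["hour"] = df["datetime"].dt.hour

# Mean concentration by hour of day; a clear pattern would indicate diurnal
# effects, e.g. upslope/downslope airflow past the observatory.
diurnal = df.groupby("hour")["co2"].agg(["mean", "std", "count"])
print(diurnal.round(2))
```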
Willis,
What an interesting graph! It shows much reduced seasonal CO2 variation in the southern hemisphere, presumably due to land-area vegetation. I think it implies CO2 uptake by ocean biota is less than we think, and that by plants and soil bacteria more …
To me, it implies that CO2 uptake by ocean biota is stabler than that by land biota, but given the lower CO2 concentrations in the southern hemisphere, overall it seems to be larger …
w.
An alternative hypothesis is that the high northern latitude concentrations are being driven by soil outgassing wherein the bacteria are temperature sensitive, tending to shut down during the coldest part of the Winter. Also, frozen permafrost may impede the soil respiration.
The graph title says global but the text says “marine boundary layer” which doesn’t sound entirely global.
Something that I find interesting about this carpet diagram is that over the decade of measurements, the seasonal amplitude at the South Pole seems to have changed little. However, the high latitude amplitudes have changed considerably, particularly since about 2010. There is a significant jump about 2014. In the most current version of this colorful rendition, it appears that the jump is actually in 2016, corresponding to the last significant El Nino.
My interpretation of this graphic is that the increasing high-latitude warming is driving both an increase in average CO2 concentrations and the amplitude of the seasonal changes. The high-latitudes also seem to be more sensitive to global warming events than south of the Equator.
I’m not sure that these carpet diagrams have been colored properly. Shouldn’t the yellow band be parallel to the date axis?
The same experts that can ‘see’ (in the historical record) the impact on CO2 levels of the 70’s oil crisis CANNOT ‘see’ any impact on CO2 levels due to the Pandemic.
It is Rorschach inkblot tests all the way down…
The decided lack of CO2 in our atmosphere holds back our production of food. It has zero effect on global warming, which is the product of planetary alignment and old Sol’s moods.
The earth could do with double or triple our CO2. Greenies and AGW alarmists, please
visit a local greenhouse food grower and find out the truth.
My personal observation is that the human race is, indeed, causing the increase in atmospheric CO2, and it is providing us with substantial benefits as you note.
Some 65 million years ago a rather large meteor hit the earth, and the resulting devastation buried a substantial amount of carbon from killed and buried forests. A study of “The Age of Coal” dated many deposits back to that event. Prior to that, the earth was capable of supporting giant dinosaurs. Other deposits have been dated far further back, to currently unknown catastrophic events.
This global warming fear-mongering is a bunch of pseudo-scientific garbage. We are actually headed for the next BIG ice age, caused by the biggest of the Milankovitch cycles.
Precision and accuracy are irrelevant to what are now wholly political discussions.
Precision and accuracy are supposed to filter what has enough scientific quality to allow it to be sent to the politicians. My gripe is that they are not being properly calculated – and many still get through to pollies despite that. Geoff S
Geoff,
I greatly appreciate the article, and thank you for that, but must cynically conclude that science has become irrelevant to the topic. Two words: Greta Thunberg. This is where politicians seek guidance on climate.
I am not sure how useful it would be to be able to detect short term changes in atmospheric CO2 based on short term changes in fossil CO2 emissions. Any change would be small compared to natural changes and in the very short term, CO2 is not going to be “well mixed” globally. There are other ways to measure the cumulative contribution of fossil CO2 on total atmospheric CO2.
CO2 is well mixed, just not perfectly mixed; the mixing process takes about 3 years, i.e. Mauna Loa registered about 400 ppm about 3 years before Antarctica did.
Phil,
Yes, the numbers suggest a mixing lag between MLO and the South Pole.
Maybe there will one day be a blip at the South Pole in their CO2, which could be less noisy.
Geoff S
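[A minimal sketch of how such a lag could be estimated, by finding the shift that best aligns two annual-mean series; the two series below are synthetic placeholders, not MLO or South Pole data.]

```python
# Estimate the mixing lag between two stations as the shift (in years) that
# minimises the mismatch between their annual-mean CO2 series.
# Synthetic placeholder series only.
import numpy as np

years = np.arange(2000, 2021)
mlo = 370 + 2.0 * (years - 2000)            # hypothetical northern series, ppm
spo = 370 + 2.0 * (years - 2000 - 3)        # hypothetical southern series, ~3 yr behind

def best_lag(a, b, max_lag=6):
    """Return the lag (in samples) of b behind a with the smallest mismatch."""
    errors = {}
    for lag in range(0, max_lag + 1):
        diff = a - b if lag == 0 else a[:-lag] - b[lag:]
        errors[lag] = np.mean(diff ** 2)
    return min(errors, key=errors.get)

print(f"estimated lag: {best_lag(mlo, spo)} years")
```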
The importance of the question is whether Draconian reductions in anthropogenic emissions are justified by the poorly supported claim that reductions in anthropogenic emissions will result in significant reductions in atmospheric concentrations of CO2. Without quantitative measurements to support the claim, we risk turning our economy on its head based on what has to be a belief that the self-appointed experts really understand the problem and its solution.
Personally, I think that emissions from the tundra are probably of greater importance than anthropogenic emissions.
I think most people here believe or understand that public policy on global warming/climate change has gotten way ahead of the science. That’s my view anyway. But wherever unscientifically supportable positions on anything crop up, I will object, whether it supports my views on the matter or not.
I’m happy to accept the Mauna Loa figures as the best we can get for CO2 concentrations, and believe the year to year variance it supports is not so large that it will be impossible to detect changes in the anthropogenic signal, although my reading of Roy Spencer is that he thinks otherwise.
My observation on this (posted previously without reaction) concerns the narrative that we are expected to accept about the origins of the increase in atmospheric CO2. This lays the figures down on the basis of a calculation of mass balance, derived from estimation of sources and sinks. It is commonly stated that the natural emissions and natural sinks are, for the purpose of the calculation, regarded as constant, and therefore short term changes in the human sources could be detected, if of sufficient magnitude. Although, as a biologist, I have some reservations about such heavy reliance on mass balance arguments, let us for the present accept it.
It is further stated that a bit more than half of the anthro emissions are taken up by the (unchanging) sinks, the excess remaining in the atmosphere. So, what we see is the TOP SLICE effect of the emissions, and any reduction of emissions should be accordingly more visible by a factor of at least two. If we can’t see that in the MLO figures, I submit something may be wrong with the assumptions that we make. Geoff Sherrington’s scepticism re the uncertainties in the figures seems to allow both sides of the argument more room for manoeuvre, though!
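[A back-of-envelope sketch of the mass-balance argument above, under both views of what an emissions drop should do to the growth rate. The numbers are my own illustrative round figures, not ones taken from the article.]

```python
# How large an atmospheric signal might a Covid-style emissions drop leave?
# Round illustrative numbers only.
GT_CO2_PER_PPM = 7.8        # ~2.13 GtC per ppm CO2, i.e. ~7.8 Gt CO2 per ppm
FOSSIL_EMISSIONS = 36.0     # Gt CO2 / yr, approximate pre-Covid level
COVID_DROP_FRACTION = 0.07  # ~7 % reduction over 2020, a commonly quoted estimate

avoided = FOSSIL_EMISSIONS * COVID_DROP_FRACTION             # Gt CO2

# Case 1: sinks stay constant (the "top slice" view) -- the full avoided amount
# appears as a shortfall in atmospheric growth.
shortfall_constant_sinks = avoided / GT_CO2_PER_PPM

# Case 2: only the usual airborne fraction (~0.5) of emissions stays in the air.
shortfall_airborne_fraction = 0.5 * avoided / GT_CO2_PER_PPM

print(f"avoided emissions:         {avoided:.1f} Gt CO2")
print(f"signal, constant sinks:    {shortfall_constant_sinks:.2f} ppm")
print(f"signal, airborne fraction: {shortfall_airborne_fraction:.2f} ppm")
# Either way, a few tenths of a ppm -- of the same order as the uncertainties
# discussed above.
```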
Contrail-free skies and smog-free cities were rather obvious bonuses of Covid.
The satellites show CO2 is not a well mixed gas … full stop … any single site measurement is useless for a global average …
Please define what you accept as being “well mixed.”
The same percentage of the air at any location throughout the atmosphere measured at the same point in time. What the word says: If you mix several substances well, the result is a mixture that has the same percentage of each ingredient throughout its total mass, no matter where you take the sample and how large that sample is.
Sorry, not a valid definition. There is ALWAYS some variation. The question is, how much variation do you accept and still call it “well-mixed”. That’s different for every situation.
w.
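[One possible way to put a number on "how much variation", as a sketch: compare the spread of station annual means with their mean. The station values below are invented placeholders of roughly the right order, not reported data.]

```python
# A crude "well mixed" metric: max-min spread of station means relative to
# the overall mean. Placeholder values only, not reported measurements.
stations = {
    "Barrow": 416.5, "Mauna Loa": 414.7, "Samoa": 412.9,
    "Cape Grim": 411.8, "South Pole": 411.5,
}
values = list(stations.values())
mean = sum(values) / len(values)
spread = max(values) - min(values)

print(f"mean:           {mean:.1f} ppm")
print(f"max-min spread: {spread:.1f} ppm ({100 * spread / mean:.2f} % of mean)")
# A spread of roughly 1 % of the mean: "well mixed" by most working definitions,
# but clearly not perfectly mixed.
```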
Willis,
As I mentioned in the first essay, a common measure of uncertainty can come from a scenario where an unknown person delivers a sample to a lab and asks for it to be analyzed for a defined substance. Mineral exploration and mining labs have this all the time. Like “Here is some soil, please analyze for ppm Copper”.
When the lab is given aliquots of the same, well-mixed sample time after time, a measure of precision can be obtained. When the submitted sample is from a standardized sample (already analyzed by many labs and again well-mixed) one obtains a measure of accuracy.
In the specific case of CO2 in air, neither procedure seems to be used. There seems to be no “unknown person” with flask in hand. There might be, but after a lot of looking at the matter, I have not found any mention.
Maybe I should float the question, “Can an analysis of CO2 in the air be used to determine the date when it was captured”? The rider is, “with what accuracy in years?”
BTW, in thinking of your response to my essay above, you wrote “It’s not a variation (with a claimed associated standard deviation of 5.5 ppmv) in measurements as he claims.” I disagree. If you crunched the numbers you would get a standard deviation of that size. It is a mathematical variation. I am fully aware that the 5.5 ppm variation contains a large element of natural variation associated with factors like latitude, and I am fully aware that this would not normally be included in a discussion of lab performance, because the lab cannot control it. It was merely setting a plausible upper limit to a spectrum of estimates of variance. There is room for other, controllable variances between that 5.5 ppm and the next lowest one I mentioned, but I did not start into what they might be. The essay was already too long, in the effort to be inarguably explicit. Geoff S.
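[A minimal sketch of the blind-sample QA Geoff describes: precision from repeated aliquots of one well-mixed sample, accuracy from a certified standard. All numbers are made up for illustration.]

```python
# Precision from replicate aliquots of one well-mixed sample; accuracy (bias)
# from blind submissions of a certified standard. Illustrative numbers only.
import statistics

replicate_results = [412.3, 412.5, 412.1, 412.6, 412.4]   # ppm, same aliquot re-submitted
standard_results  = [413.9, 414.2, 414.0, 414.3]          # ppm, blind certified standard
CERTIFIED_VALUE   = 413.2                                  # ppm, assigned value of the standard

precision = statistics.stdev(replicate_results)             # repeatability, 1-sigma
bias = statistics.mean(standard_results) - CERTIFIED_VALUE  # accuracy estimate

print(f"precision (1-sigma of replicates): {precision:.2f} ppm")
print(f"bias vs certified value:           {bias:+.2f} ppm")
```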
CO2 is not a climate driver. Man, what is wrong with people?
yes, that’s unfortunate… as Roy points out, the detectability of a falloff in emissions should at least bound the effect of human emissions on CO2
sadly it seems that (as with energy balance) the errors are on par with the effects
Since it is clear that CO2, from whatever source, has a negligible effect on global temperatures, why do we bother trying to measure CO2 to better than, say, 2 ppm? Recall that CO2 is up nearly 50% in the past 100 years while temperatures are more or less flat, at best. Brrrr.
With all the other real and disastrous problems in the world, measuring CO2 levels to any level of precision is like the band members on the Titanic tuning up.
“Authors have to cease and desist from cherry picking, concealment of adverse data, misrepresentation of uncertainty and reluctance to respond to criticisms of their work.” In short, they should behave like the scientists they’re being paid to be, not like a bunch of schoolchildren.