Opinion by Dr. Tim Ball
I have no data yet. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. – Arthur Conan Doyle. (Sherlock Holmes)
Create The Facts You Want.
In a comment about the WUWT article “The Record of recent Man-made CO2 emissions: 1965-2013”, Pamela Gray, graphically but pointedly, summarized the situation.
When will we finally truly do the math? The anthropogenic only portion of atmospheric CO2, let alone China’s portion, does not have the cojones necessary to make one single bit of “weather” do a damn thing different. Take out just the anthropogenic CO2 and rerun the past 30 years of weather. The exact same weather pattern variations would have occurred. Or maybe because of the random nature of weather we would have had it worse. Or it could have been much better. Now do something really ridiculous and take out just China’s portion. I know, the post isn’t meant to paint China as the bad guy. But. Really? Really? All this for something so tiny you can’t find it? Not even in a child’s balloon?
My only quibble is that, while the amount illustrates the futility of the claims, as Gray notes, the Intergovernmental Panel on Climate Change (IPCC) and the Environmental Protection Agency (EPA) are focused on trends and attribution. It must have a human cause and be steadily increasing or, as they prefer, getting worse.
Narrowing the Focus
It is necessary to revisit the criticisms made over the last several years of the CO2 levels created by the IPCC. Nowadays, one measure of the accuracy of those criticisms is the vehemence of the personal attacks designed to divert attention from the science and evidence.
From its inception, the IPCC focused on human production of CO2. It began with the UNFCCC definition of climate change, which restricts the term to changes caused by humans. The goal was to prove the hypothesis that an increase in atmospheric CO2 would cause warming. This required evidence that the level had increased since pre-industrial times and would increase each year because of human industrial activity. How long before they start reducing the rate of CO2 increase to make it fit the declining temperatures? They are running out of guesses, 30 at last count, to explain the continued lack of temperature increase, now at 17 years and 10 months.
The IPCC makes the bizarre claim that up until 1950 the human addition of CO2 was a minor driver of global temperature. After that, over 90 percent of the temperature increase is attributed to human CO2.
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations.
The claim that a fractional increase in CO2 from human sources, when CO2 is naturally only 4 percent of all greenhouse gases, became the dominant factor in just a couple of years is not credible. The claim comes from computer models, which are the only place in the world where a CO2 increase causes a temperature increase. It depends on human production and atmospheric levels increasing, and it assumes temperature continues to increase, as all three IPCC scenario projections imply.
Their frustration is that they control the CO2 data, but after the University of Alabama in Huntsville (UAH) began producing satellite global temperature data, their control of the temperature record was curtailed. It did not stop them completely, as disclosures by McIntyre, Watts, Goddard, and the New Zealand Climate Science Coalition, among others, illustrated. All showed adjustments designed to enhance and emphasize higher modern temperatures.
Now they’re confronted with T. H. Huxley’s challenge,
The Great Tragedy of Science – the slaying of a beautiful hypothesis by an ugly fact.
This article examines how the modern levels of atmospheric CO2 were determined and controlled to fit the hypothesis. They may fit a political agenda, but they don’t fit nature’s agenda.
New Deductive Method: Create the Facts to Fit the Theory
Farhad Manjoo asked in True Enough: Learning To Live In A Post-fact Society,
“Why has punditry lately overtaken news? Why do lies seem to linger so long in the cultural subconscious even after they’ve been thoroughly discredited? And why, when more people than ever before are documenting the truth with laptops and digital cameras, does fact-free spin and propaganda seem to work so well?”
Manjoo’s comments apply to society in general, but they are amplified in climate science because of the public’s varying ability to judge scientific issues. A large majority is more easily deceived.
Manjoo argues that people create facts themselves or find someone to produce them. Creating data is the only option in climate science because, as the 1999 NRC Report found, there is virtually none. A response to the February 3, 1999 US National Research Council (NRC) Report on climate data said,
“Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.”
The situation is worse today. The number of stations used has been dramatically reduced, and records have been adjusted to lower historic temperatures, which steepens the gradient of the record. The lack of data for the oceans was recently identified.
“Two of the world’s premiere ocean scientists from Harvard and MIT have addressed the data limitations that currently prevent the oceanographic community from resolving the differences among various estimates of changing ocean heat content.”
Oceans are critical to CO2 levels because of their large sink or source capacity.
The data necessary for a viable determination of climate mechanisms, and thereby climate change, are completely inadequate. This applies especially to the structure of climate models. There are no data for at least 80 percent of the grid cells covering the globe, so modelers guess; it is called parameterization. The 2007 IPCC Report notes,
Due to the limited resolutions of the models, many of these processes are not resolved adequately by the model grid and must therefore be parameterized. The differences between parameterizations are an important reason why climate model results differ.
Variable results occur because of inadequate data at the most basic level and subjective choices by the people involved.
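To make the term concrete, here is a minimal, purely illustrative sketch of what a parameterization is; the formula, variable names and numbers are invented for illustration and are not taken from any actual climate model. A sub-grid process the model cannot resolve is replaced by a simple formula with a tunable constant, and different (equally plausible-looking) choices of that constant give different model output.

```python
# Invented toy example of a parameterization; not from any real climate model.
# Suppose sub-grid convective cloud cover cannot be resolved by the grid, so it
# is replaced by a simple function of grid-mean relative humidity with a
# tunable constant chosen by the modeler.

def cloud_fraction(rel_humidity, tuning_constant):
    """Parameterized cloud fraction for one grid cell (dimensionless, 0..1)."""
    return max(0.0, min(1.0, tuning_constant * (rel_humidity - 0.6)))

grid_mean_rh = 0.8
for c in (1.5, 2.0, 2.5):   # three subjective tunings of the same scheme
    print(f"tuning {c}: cloud fraction = {cloud_fraction(grid_mean_rh, c):.2f}")
```

Even in this toy case the output ranges from 0.30 to 0.50 for the same input, purely from the choice of tuning constant, which is the sense in which "differences between parameterizations" drive differences between models.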
The IPCC Produces the Human Production Numbers
In the 2001 Report, the IPCC identified 6.5 GtC (gigatons of carbon) from human sources. The figure rose to 7.5 GtC in the 2007 report, and by 2010 it was 9.5 GtC. Where did they get these numbers? The answer is that the IPCC has them produced and then vets them. In the FAQ section they ask, “How does the IPCC produce its Inventory Guidelines?”
Utilizing IPCC procedures, nominated experts from around the world draft the reports that are then extensively reviewed twice before approval by the IPCC.
The scenarios were called the Special Report on Emissions Scenarios (SRES) until the 2013 Report, when they became Representative Concentration Pathways (RCP). In March 2001, John Daly reported Richard Lindzen commenting on the SRES and the entire IPCC process, including the SRES, as follows:
In a recent interview with James Glassman, Dr. Lindzen said that the latest report of the UN-IPCC (that he helped author), “was very much a children’s exercise of what might possibly happen” prepared by a “peculiar group” with “no technical competence.”
William Kininmonth, author of the insightful book “Climate Change: A Natural Hazard”, is a former head of Australia’s National Climate Centre and was its delegate to the WMO Commission for Climatology. He wrote the following in an email on the ClimateSceptics group page:
I was at first confused to see the RCP concept emerge in AR5. I have come to the conclusion that RCP is no more than a sleight of hand to confuse readers and hide absurdities in the previous approach.
You will recall that the previous carbon emission scenarios were supposed to be based on solid economic models. However, this basis was challenged by reputable economists and the IPCC economic modelling was left rather ragged and a huge question mark hanging over it.
I sense the RCP approach is to bypass the fraught economic modelling: prescribed radiation forcing pathways are fed into the climate models to give future temperature rise—if the radiation forcing plateaus at 8.5W/m2 sometime after 2100 then the global temperature rise will be 3C. But what does 8.5 W/m2 mean? Previously it was suggested that a doubling of CO2 would give a radiation forcing of 3.7 W/m2. To reach a radiation forcing of 7.4 W/m2 would thus require a doubling again—4 times CO2 concentration. Thus to follow RCP8.5 it is necessary for the atmospheric CO2 concentration equivalent to exceed 1120ppm after 2100.
We are left questioning the realism of a RCP 8.5 scenario. Is there any likelihood of the atmospheric CO2 reaching about 1120 ppm by 2100? IPCC has raised a straw man scenario to give a ‘dangerous’ global temperature rise of about 3C early in the 22nd century knowing full well that such a concentration has an extremely low probability of being achieved. But, of course, this is not explained to the politicians and policymakers. They are told of the dangerous outcome if the RCP8.5 is followed without being told of the low probability of it occurring.
One absurdity is replaced by another! Or have I missed something fundamental?[1]
No, nothing is missed! In reality, however, it doesn’t matter whether it changes anything; it achieves the goal of ever-increasing CO2 and its supposed impact on global warming. The underpinning of IPCC climate science and economics depends on accurate data and knowledge of mechanisms, and that is not available.
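For readers who want to check the arithmetic in Kininmonth’s email, here is a minimal sketch using the commonly cited simplified CO2 forcing expression ΔF ≈ 5.35 ln(C/C0). The 280 ppm pre-industrial baseline is an assumption added for illustration; the 3.7 W/m² per doubling and the roughly 1120 ppm figure are the values quoted in the email above.

```python
import math

# Simplified CO2 radiative forcing expression (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
ALPHA = 5.35
C0 = 280.0   # assumed pre-industrial baseline (ppm), for illustration only

def forcing(c_ppm):
    """Forcing of CO2 at concentration c_ppm relative to the assumed baseline."""
    return ALPHA * math.log(c_ppm / C0)

def concentration_for_forcing(target_wm2):
    """CO2 concentration implied by a given forcing under the same expression."""
    return C0 * math.exp(target_wm2 / ALPHA)

print(forcing(2 * C0))                  # one doubling:  ~3.7 W/m^2
print(forcing(4 * C0))                  # two doublings: ~7.4 W/m^2 (1120 ppm)
print(concentration_for_forcing(8.5))   # RCP8.5 figure: well above 1120 ppm
```

Under this expression, 8.5 W/m² corresponds to a CO2-equivalent concentration somewhat beyond four times the assumed baseline, which is the point Kininmonth is making about the plausibility of RCP8.5.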
We know there was insufficient weather data on which to construct climate models and the situation deteriorated as they eliminated weather stations, ‘adjusted’ them and then cherry-picked data. We know knowledge of mechanisms is inadequate because the IPCC WGI Science Report says so.
Unfortunately, the total surface heat and water fluxes (see Supplementary Material, Figure S8.14) are not well observed.
or
For models to simulate accurately the seasonally varying pattern of precipitation, they must correctly simulate a number of processes (e.g., evapotranspiration, condensation, transport) that are difficult to evaluate at a global scale.
Two critical situations were central to control of atmospheric CO2 levels. We know Guy Stewart Callendar, a British steam engineer, cherry-picked the low readings from some 90,000 19th-century atmospheric CO2 measurements. This not only established a low pre-industrial level, but also altered the trend of atmospheric levels (Figure 1).
Figure 1 (After Jaworowski; Trend lines added)
Callendar’s work was influential in the Gore-generated claims of human-induced CO2 increases. However, the most influential paper in the climate community, especially at CRU and the IPCC, was Tom Wigley’s 1983 paper “The pre-industrial carbon dioxide level” (Climatic Change, 5, 315-320). I held seminars in my graduate-level climate course on the validity and selectivity it used to establish a pre-industrial baseline.
I wrote an obituary on learning of Ernst-Georg Beck’s untimely death.
I was flattered when he asked me to review one of his early papers on the historic pattern of atmospheric CO2 and its relationship to global warming. I was struck by the precision, detail and perceptiveness of his work and urged its publication. I also warned him about the personal attacks and unscientific challenges he could expect. On 6 November 2009 he wrote to me, “In Germany the situation is comparable to the times of medieval inquisition.” Fortunately, he was not deterred. His friend Edgar Gartner explained Ernst’s contribution in his obituary: “Due to his immense specialized knowledge and his methodical rigor Ernst very promptly noticed numerous inconsistencies in the statements of the Intergovernmental Panel on Climate Change (IPCC). He considered the warming of the earth’s atmosphere as a result of a rise of the carbon dioxide content of the air from approximately 0.03 to 0.04 percent to be impossible. And he doubted that the curve of the CO2 increase noted on the Hawaii volcano Mauna Loa since 1957/58 could be extrapolated linearly back to the 19th century.” (This is a translation from the German.)
Beck was the first to analyze the 19th-century data in detail. The data were collected in scientific attempts to measure precisely the amount of CO2 in the atmosphere. The effort began in 1812, triggered by Priestley’s work on atmospheric oxygen, and was part of the scientific effort to quantify all atmospheric gases. There was no immediate political motive. Beck did not cherry-pick the results, but examined the method, location and as much detail as possible for each measurement, in complete contrast to what Callendar and Wigley did.
The IPCC had to show that,
· Increases in atmospheric CO2 caused temperature increase in the historic record.
· Current levels are unusually high relative to the historic record.
· Current levels are much higher than pre-industrial levels.
· The differences between pre-industrial and current atmospheric levels are due to human additions of CO2 to the atmosphere.
Beck’s work showed the fallacy of these claims and in so doing put a big target on his back.
Again from my obituary;
Ernst Georg Beck was a scholar and gentleman in every sense of the term. His friend wrote, “They tried to denounce Ernst Georg Beck on the Internet as a naive amateur and data counterfeiter. Unfortunately, Ernst could hardly defend himself in the last months because of his progressive illness.” His work, determination and ethics were all directed at answering questions in the skeptical method that is true science; the antithesis of the efforts of all those who challenged and tried to block or denigrate him.
The 19th-century CO2 measures are no less accurate than those for temperature; indeed, I would argue that Beck shows they are superior. So why, for example, are his assessments any less valid than those made for the early portions of the Central England Temperatures (CET)? I spoke at length with Hubert Lamb about the early portion of Manley’s CET reconstruction because the instruments, locations, measures, records and knowledge of the observers were comparable to those in the Hudson’s Bay Company record I was dealing with.
Once the pre-industrial level was created, it became necessary to ensure the new post-industrial CO2 trend continued. That was achieved when C. D. Keeling established the Mauna Loa CO2 measuring station. As Beck notes,
Modern greenhouse hypothesis is based on the work of G.S. Callendar and C.D. Keeling, following S. Arrhenius, as latterly popularized by the IPCC.
Keeling’s son operates Mauna Loa and, as Beck notes, “owns the global monopoly of calibration of all CO2 measurements.” He is also a co-author of the IPCC reports, which accept Mauna Loa and all other readings as representative of global levels. So the IPCC controls both the human production figures and the atmospheric CO2 levels, and both are constantly and consistently increasing.
This diverts attention from the real problem with the measures and claims. The fundamental IPCC objective is to identify human causes of global warming. You can only determine the human portion and its contribution if you know the natural levels and how much they vary, and for those we have only very crude estimates.
What Values Are Used for Each Component of the Carbon Cycle?
Dr. Dietrich Koelle is one of the few scientists to assess estimates of natural annual CO2 emissions.
Annual Carbon Dioxide Emissions (GtC per annum)
1. Respiration (humans, animals, phytoplankton): 45 to 52
2. Ocean out-gassing (tropical areas): 90 to 100
3. Volcanic and other ground sources: 0.5 to 2
4. Ground bacteria, rotting and decay: 50 to 60
5. Forest cutting, forest fires: 1 to 3
6. Anthropogenic emissions, fossil fuels (2010): 9.5
TOTAL: 196 to 226.5
Source: Dr. Dietrich Koelle
The IPCC estimate of human production (6) for 2010 was 9.5 GtC, but that is total production. One of the early issues in the push to ratify the Kyoto Protocol was an attempt to get US ratification. The US asked for carbon credits, primarily for CO2 removed through reforestation, so that a net figure would apply to its assessment as a developed nation. It was denied. The reality is that the net figure better represents human impact. If we use human net production (6) of 5 GtC for 2010, then it falls within the uncertainty range of the estimates for three of the natural sources, (1), (2), and (4).
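As a quick illustration of the proportions involved, here is a minimal sketch using Koelle’s figures from the table above; the 5 GtC “net” human figure is the one suggested in the preceding paragraph, and the ranges simply bracket the natural sources (1) through (5).

```python
# Rough proportions using Dr. Koelle's estimates (GtC per annum) from the table above.
natural_low = 45 + 90 + 0.5 + 50 + 1     # lower bounds of natural sources (1)-(5)
natural_high = 52 + 100 + 2 + 60 + 3     # upper bounds of natural sources (1)-(5)

human_gross = 9.5   # IPCC figure for 2010, total fossil-fuel emissions
human_net = 5.0     # net figure suggested in the text (gross minus sinks such as reforestation)

for label, human in (("gross", human_gross), ("net", human_net)):
    low_share = human / (natural_high + human) * 100
    high_share = human / (natural_low + human) * 100
    print(f"Human {label} share of total annual emissions: "
          f"{low_share:.1f}% to {high_share:.1f}%")

# Width of the uncertainty ranges of the three largest natural sources, for comparison:
print("Uncertainty in sources (1), (2), (4):", 52 - 45, 100 - 90, 60 - 50, "GtC")
```

On these figures the gross human share is roughly 4 to 5 percent of total annual emissions, and the net figure sits comfortably inside the stated uncertainty of any one of the three largest natural sources.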
The Truth Will Out.
How much longer will the IPCC continue to produce CO2 data with trends to fit their hypothesis that temperature will continue to rise? How much longer before the public becomes aware of Gray’s colorful observation that, “The anthropogenic only portion of atmospheric CO2, let alone China’s portion, does not have the cojones necessary to make one single bit of “weather” do a damn thing different.” The almost 18-year leveling and slight reduction in global temperature is essentially impossible under IPCC assumptions. The claim has already been made that the hiatus doesn’t negate their science or projections, instead of acknowledging that the hiatus, along with the failed predictions, completely undermines their fear mongering.
IPCC and EPA have already shown that being wrong or being caught doesn’t matter. The objective is the scary headline, enhanced by the constant claim it is getting worse at an increasing rate, and time is running out. Aldous Huxley said, “Facts do not cease to exist because they are ignored.” We must make sure they are real and not ignored.
[1] Reproduced with permission of William Kininmonth.
Alan Robertson says:
August 5, 2014 at 6:51 pm
“There appears to be no curve in the line in the last decade, the slope appears constant.”
Exactly. The T trend has been flat, which means a constant slope in CO2, because the rate of change of CO2 is essentially an affine function of temperature. When temperatures are flat, the rate of change is flat, i.e., CO2 is rising at an effectively constant rate.
Contrast this with emissions, for which the slope is not constant. Emissions are accelerating.
“The decades 1959-1979 do not show increasing T trend.”
They show a trend, which begets curvature in overall CO2.
You keep seeming to want CO2 to be affine to temperature. It isn’t. The rate of change is. That is the empirical result.
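[A minimal sketch of the rate-of-change relationship described in this reply, with made-up numbers purely for illustration: if dCO2/dt = k(T − T0), a flat temperature gives a straight-line (constant-slope) CO2 curve, while a temperature trend gives curvature.]

```python
import numpy as np

# Toy illustration of dCO2/dt = k * (T - T0); k, T0 and the temperature series
# are hypothetical values, not fitted to any real data.
k, T0 = 2.0, -0.2            # ppm/yr per degC, and the temperature offset
years = np.arange(1959, 2015)

def co2_path(temps, c0=315.0):
    """Integrate the rate equation to get a CO2 concentration path (ppm)."""
    rates = k * (temps - T0)
    return c0 + np.concatenate(([0.0], np.cumsum(rates[:-1])))

flat_T = np.full(years.shape, 0.2)               # flat temperature anomaly
rising_T = np.linspace(-0.1, 0.5, years.size)    # linear temperature trend

co2_flat = co2_path(flat_T)      # straight line: constant slope
co2_rising = co2_path(rising_T)  # curving upward: accelerating CO2

print("slope change, flat T   :", np.diff(co2_flat)[-1] - np.diff(co2_flat)[0])
print("slope change, rising T :", np.diff(co2_rising)[-1] - np.diff(co2_rising)[0])
```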
Phil. says:
August 5, 2014 at 7:24 pm
I will leave it to you to inform NASA that they should return to epicyclic theories to predict the orbits of their space vehicles.
That’s the problem with theories unmoored from physical principles. They work just fine, right up until the time that they don’t.
Bart, I’m not sure that you have found evidence that CO2 follows a rise in ocean temperatures. I think that you found that you can’t leave a climate scientist alone for 10 minutes with a database.
richardscourtney says:
August 5, 2014 at 2:45 pm
Several measurements at Poonah were taken below the growing leaves: these easily show levels of 1,000 ppmv and more. Make a pocket hole in the ground and measure CO2: the soil bacteria provide levels of CO2 which may be in the thousands…
A simple indication for the unreliability of the historical data is by looking at the variability: if the variability is high, then the location was unsuitable for “background” measurements. For Poonah, the range of the measurements was between 300 and 700 ppmv:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/misra_wind.jpg
Interesting: the higher the wind speed, the lower the CO2 levels, which shows the better mixing with the bulk of the atmosphere at high wind speeds.
Anyway, there is not the slightest resemblance between “background” CO2 levels in the bulk of the atmosphere and what was measured at Poonah. That is the reason why Callendar rejected all measurements taken for agricultural purposes.
Indeed, Beck compiled an enormous amount of data. His error was that he didn’t use any criteria for the exclusion of data like those from Poonah, which means that his compilation can’t be used as a basis for historical “background” CO2 levels…
I should like to ask Ferdinand his view on the assumption that CO2 is ‘well-mixed’ in the atmosphere, where Beck’s work shows that it might not be so. As I understand it, if this assumption is incorrect, then the value of the Mauna Loa series becomes equivalent to that of a single thermometer series in terms of evidence of climate change.
Bart says:
August 5, 2014 at 7:58 pm
Phil. says:
August 5, 2014 at 7:24 pm
I will leave it to you to inform NASA that they should return to epicyclic theories to predict the orbits of their space vehicles.
That’s the problem with theories unmoored from physical principles. They work just fine, right up until the time that they don’t.
It would work just fine since any orbit can be represented by a number of epicycles, just like a curve can be represented by a Fourier series.
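[For what it’s worth, a minimal sketch of the mathematical point Bart is making: any sufficiently smooth closed curve can be approximated by a finite sum of “epicycles”, i.e. a truncated complex Fourier series. The ellipse below is an arbitrary example, not an actual orbit calculation.]

```python
import numpy as np

# Approximate a closed curve (here an arbitrary ellipse) with N "epicycles",
# i.e. a truncated complex Fourier series z(t) ~ sum_k c_k * exp(2*pi*i*k*t).
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
z = 3.0 * np.cos(2 * np.pi * t) + 1j * 2.0 * np.sin(2 * np.pi * t)  # target curve

coeffs = np.fft.fft(z) / z.size            # Fourier (epicycle) coefficients
N = 5                                      # keep only the N largest epicycles
keep = np.argsort(np.abs(coeffs))[-N:]
mask = np.zeros_like(coeffs)
mask[keep] = coeffs[keep]
z_approx = np.fft.ifft(mask * z.size)      # curve rebuilt from the kept epicycles

print("max reconstruction error:", np.max(np.abs(z - z_approx)))
```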
TonyN says:
August 6, 2014 at 1:09 am
TonyN, CO2 is well mixed in 95% of the atmosphere: that is everywhere over the oceans and above a few hundred meters over land up to 30 km height.
It is not well mixed near huge sinks and sources, that is in the middle of towns, forests, fields,… especially under inversion at night, when vegetation emits CO2 which accumulates without wind to disperse it. That is the problem with the historical data: many of them were taken at places with huge diurnal, day by day, monthly and seasonal variability. That is the equivalent of taking temperature on a hot asphalt parking lot…
Well mixed doesn’t mean that CO2 is exactly the same at every moment of the year at every place on earth, but that any huge change is readily distributed over the rest of the atmosphere.
The variability from Alert (Canada) to the South Pole is not more than 2% of full scale, while the seasonal CO2 exchanges are around 20% of all CO2 in the atmosphere. In my opinion, that is well mixed…
Even if you take only one station as the standard, it doesn’t matter, as all “background” stations show the same trend over the years, where stations at altitude lag ground level stations and SH stations lag NH stations:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
To give an example: at some places (e.g. in the Rocky Mountains, Colorado) regular flights are done, measuring CO2 levels at different heights. Have a look what that gives with inversion in the valley, where they start measuring:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/inversion_co2.jpg
Up to 500-600 meters you can find high levels of CO2; above that, the values were the same as at Mauna Loa, 6,000 km away, for the same days…
Tonyb says:
August 5, 2014 at 3:27 pm
After reading very many hundreds of papers and realising that taking measurements was a well-established everyday occurrence through much of the 19th century, enshrined in law by an act of parliament in the 1890s, it seemed strange to me that many hundreds of clever scientists were apparently incapable of taking measurements after 110 years of trying.
It’s not that they were incapable of taking measurements (although some of the techniques have flaws); it’s how representative the samples are of the atmospheric value, and taking readings in the vicinity of a source or sink is a problem. After all, that’s why the measurements were being taken in the factories: it was recognized that those values would be perturbed from those in pristine air. Regarding the ‘wet’ chemistry methods, titration was normally carried out by pipetting by mouth; I’m sure you can see the potential for error there. There was no established procedure of consistency checks by comparison with standard gas samples, unlike now; in my lab at one point I had about 40 different standard gas mixtures for calibration of GCs, spectrometers, etc.
J R Bray (1958) reported that tests of the Pettenkofer method versus standard gas samples showed that the Pettenkofer method measured high, the actual values being from 66-89% of the measurements. The Pettenkofer method was the method of choice in the 19th century.
Callendar undoubtedly selected low readings for his seminal paper in 1938, which was roundly taken apart by Slocum in 1956. Keeling was influenced by Callendar. What the truth of past levels of CO2 will turn out to be I do not know. It is a shame they cannot be properly audited by competent independent scientists, as that would put the matter to rest.
They have been, I’m sure you’ve read Bray’s work, where he gives a detailed analysis of methods, sampling, etc?
He concludes:
“The above analyses indicate by a variety of comparative techniques an increase in measured CO2 for most comparisons during the past 100 years; in some cases the increase is statistically significant, in others not so. There appear to be several possible explanations for this increase, each of which may be in part correct: 1) an atmospheric increase as suggested by Callendar, from industrial activity and from clearing, draining and burning of vegetation; 2) a coincidence of the influence of micro-atmospheres with areas of lower concentration having been sampled by chance in the 19th century and areas of higher concentration sampled more recently; 3) improvement (or change) in chemical technique, with gradually improving techniques having first reduced the measured values from 500 or more ppm to less than 300 by the end of the 19th century and then raised the value to over 300 in the recent century. The second and third possibilities appear less likely since a) an attempt was made to minimize the influence of micro-atmospheres in the selection of data; b) comparisons of identical techniques between two periods show increases with the exception of the earliest comparison, although sufficient studies are not available to establish statistical significance, and the most recent increase is based on questionable values.
The lack of significant differences in 9 of the 12 comparisons in Table 5 and the possibility that these differences may be related to factors other than an atmospheric increase emphasizes the need, noted by Fonselius et al. (1956), for continuing studies with similar techniques in identical locations. Further establishment of permanent CO2 stations might consider some of the criteria noted for selection of data in the present study, especially those indicating the importance of local CO2 sources (urban and industrial areas; animals and plants, especially soil organisms) and sinks (plants). Land CO2 stations should be on areas in which the surrounding vegetation can be protected from drastic change and is as close to CO2 equilibrium as is possible. The influence of height above ground deserves attention; as noted by Huber (1952), the degree of daily variation decreases with height.”
That is exactly what has been done and led to the results we have over the last half century.
http://onlinelibrary.wiley.com/store/10.1111/j.2153-3490.1959.tb00023.x/asset/j.2153-3490.1959.tb00023.x.pdf;jsessionid=284D49AEF8277ABEB470C39B330E96EA.f03t04?v=1&t=hyikoupt&s=02e4169c5831d81b232469c17e74bd86baea5ef0
Tonyb says:
August 5, 2014 at 2:52 pm
Hi Ferdinand, hope you are keeping well.
After a few repairs, thanks to modern surgery (bypasses), again alive and kicking…
Keeling knew nothing of taking CO2 measurements when he started his job, yet within a year he was apparently taking measurements to a far greater degree of accuracy than the highly experienced scientists before him, stretching back some 110 years?
Don’t underestimate Keeling! Have a look at his autobiography: from page 31 on his story of CO2 measuring starts where he describes how he was getting involved.
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
He designed and built his own instruments, and one of his instruments, with an accuracy of 1:40,000 for the calibration of CO2 equipment and calibration gases, was still in use at Scripps until a few years ago.
OK, so here’s a question for the mathematically inclined part of the group. I probably can answer it for myself, but I’m curious as to whether anyone has already worked it out.
CO_2 is a strong absorber in the relevant IR bands. The mean free path of photons at concentrations of 300-400 ppm is on the order of a meter or a few (IIRC), making it effectively opaque — the well-known fact that CO_2 is saturated as a greenhouse gas, so that the only variation in Lambert-Beers comes from the very slow, logarithmic increase in absorption with concentration. However, this very same factor means that photons from the surface only penetrate a very few meters as they are emitted upwards before they are absorbed — it is the surface layer itself that is warmed by the ground and that, in turn, radiates some fraction of the absorbed energy back down at the ground to slow its net rate of radiative energy loss, with the rest gradually diffusing upwards until it eventually gets out when the mean free path becomes big enough to reach “infinity” without another absorption/emission.
What, then, is the net effect of a layer of air on the ground 1-100 m thick where the CO_2 concentration is routinely 500 to 1000 ppm, gradually tailing off into the background “well-mixed” concentration? Note well that this “ground layer” thickness is totally invisible to GCMs, that IIRC typically have a vertical grid dimension of 1 km (and at that, have to neglect all kinds of dynamics in the vertical direction and replace it with approximations). It may be short relative to 1 km but it is long relative to the MFP, effectively completely opaque with absorptivity governed by the higher concentration, not the presumed well-mixed background that re-establishes when one is high enough off of the ground.
This is relevant to the UHI effect, as many if not most metropolitan areas are active sources of both CO_2 and water vapor from burning stuff — gasoline in cars, natural gas in furnaces and stoves, oil in furnaces, various biofuels (wood, charcoal, animal dung) for cooking and heating (not just in the first world cities), respiring humans. This can easily nonlinearly supplement any heating associated with the replacement of grassland and trees with asphalt parking lots, dark rooftops, etc, trapping heat absorbed during the day on the low-albedo surfaces instead of letting it radiate away as it normally would at night. It is also relevant to non-urban heating (or at least heat retention) in the vicinity of agriculture that generates 1000 ppm CO_2 concentrations in the ground layer. Airports (where “official” weather stations are often located) tend to be places where enormous jets take off and land, dumping huge boluses of CO_2 and water vapor in a very concentrated form directly into the air above weather stations even when those stations are otherwise decently located (usually, they are terribly located e.g. a few meters from asphalt runways that superheat during a summer day and retain heat long into a winter night). It can be confounded in many ways with other phenomena — sunny days are often atmospheric high pressure days, which are in turn days where the absorptivity is modulated upwards due to pressure broadening; cloudy days are often low pressure days and absorptivity is modulated downwards due to pressure broadening.
The increasing divergence between SSTs, LTTs, and LSTs suggests that a lot of what is being interpreted as global warming is really local warming — basically HHE (human habitation effect) warming in the vicinity of human habitations of all sorts — warming due to agriculture or land use changes, UHI warming, warming due to alterations in local GHG concentrations a factor of 2-3 larger than the average atmospheric concentration in an easily optically thick layer (but one that GCMs cannot resolve or explicitly treat, assuming that we had any way of tracking CO_2 sources at the required spatiotemporal granularity).
Details like this seem as though they would matter a lot, by introducing a systematic error into the computations of global average surface temperature, which already fail to account for “ordinary” UHI heating with anything like accuracy and more or less completely ignore the rest of HHE heating. This is an easy explanation for the failure of LTTs and SSTs to track ground surface temperature increases (such as they are) in the major temperature records — the models are simply failing to correctly do the ground averages because they are smearing local HHE warming in the thermometric record out into the uninhabited countryside and ocean on the one hand, and failing to correctly account for the radiative and other heat transfer dynamics associated with HHE variability at spatiotemporal scales several orders of magnitude smaller than the various grid scales used in GCMs. The former adds a non-physical average warming that increases strictly with human population; the latter means that the models were normalized with an effective CO_2 level (the well-mixed concentration) that is much lower than the real concentration almost anywhere near the surface of the ground for much of the diurnal radiative cycle, and that is correspondingly logarithmically less sensitive to increases.
To put it in simple terms, consider a resistance model. Imagine we have a capacitor that is being charged with some fixed input current. It is shorted out by three resistors — two in series, both in parallel with the third “shunt” resistor. I know the physical dimensions of the two in series — their conduction cross-section and length — and know the resistivity of one of them and know only that the resistivity and net resistance of the other are considerably larger.
I then measure the charge on the capacitor in equilibrium, neglect the larger resistor, and compute a value for the shunt resistor based on the assumption that the lower-resistance resistor is all there is in the series leg. Since the actual resistance in the series pathway is higher, my value for the shunt resistance that “fits” this equilibrium charge is now too high.
I then wonder what happens if I add (say) 10% of the resistivity of the lower resistivity material to both resistivities in the series path. Since in my model the shunt resistance is too large and the large series resistance is ignored, I find that the equilibrium charge goes up strongly as I increase the resistivity of the weaker series resistance.
But what really happens? Suppose that the larger resistivity material initially had an equal total resistance (R) and a resistivity 10x that of the lower resistivity material. The series resistance was actually twice as large as I assumed when evaluating the shunt, so the shunt resistance has to be correspondingly lower for the total parallel resistance to correctly match the observed initial charge. Adding 10% to the resistivity of the lower resistivity material makes its resistance 1.1R, but adding the same absolute amount to the resistivity of the higher resistance only increases it by 1%, it goes to 1.01 times its initial value.
Instead of increasing the resistance in this leg from R to 1.1R, one has increased the resistance from 2R to 2.05 R. The shunt resistance is correspondingly lower. I’ve increased the resistance of the drain by strictly less than half as much as I computed ignoring the higher resistivity but shorter path. The temperature increases by strictly less than half my previous estimate.
If the resistance of the shorter path is actually larger than the resistance of the longer path, this problem gets worse.
This example has perfect analogues in climate — the input current is incoming radiative power from the sun, the shunt resistances are things like albedo, vertical heat transport via convection and latent heat, and the two series resistors are the (neglected) thin but optically thick ground surface layer with CO_2 that is already as much as twice the well-mixed value of the considered outer layer of the atmosphere. The lower the effective thermal resistance of the “shunt” — the more effective the alternative pathways are at dumping heat — the lower the equilibrium temperature. The more important/larger neglected series radiative resistance is compared to the well-mixed estimate, the greater the semi-empirical fit error to both the shunt resistance terms and the corresponding increase in temperature resulting from increasing both the well mixed and much higher component of the atmospheric CO_2 by the same absolute amount.
rgb
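[A minimal numerical sketch of the resistor analogy rgb lays out above, with made-up values chosen only to show the direction of the effect: if the fit ignores a series resistance that is really there, the shunt resistance it infers is too large and the predicted response to a given increment is overstated.]

```python
# Toy version of rgb's resistor analogy; all numbers are hypothetical.
# A fixed current I charges a capacitor that drains through a shunt resistor
# in parallel with a series pair (R_low + R_high). The naive model ignores R_high.

I = 1.0             # fixed charging current (arbitrary units)
R_low = 1.0         # series resistor we know about (lower-resistivity material)
R_high = 1.0        # series resistor of the higher-resistivity (10x) material
R_shunt_true = 1.0  # the real shunt resistance

def parallel(a, b):
    return a * b / (a + b)

# Equilibrium voltage (proportional to charge) in the real circuit:
V_obs = I * parallel(R_shunt_true, R_low + R_high)

# Naive fit: ignore R_high and choose R_shunt_fit so the model reproduces V_obs:
#   parallel(R_shunt_fit, R_low) = V_obs / I
R_eq = V_obs / I
R_shunt_fit = R_eq * R_low / (R_low - R_eq)   # comes out larger than the true shunt

# Perturbation: add 10% of the low resistivity to both materials.
# R_low rises by 10%; R_high (10x the resistivity) rises by only 1%.
V_naive = I * parallel(R_shunt_fit, 1.10 * R_low)
V_real = I * parallel(R_shunt_true, 1.10 * R_low + 1.01 * R_high)

print("fitted vs true shunt  :", R_shunt_fit, R_shunt_true)
print("predicted rise (naive):", V_naive - V_obs)
print("actual rise (real)    :", V_real - V_obs)   # well under half the naive rise
```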
Phil. says:
August 6, 2014 at 4:46 am
“It would work just fine since any orbit can be represented by a number of epicycles, just like a curve can be represented by a Fourier series.”
Sure thing. I want to launch a satellite into a sun synchronous orbit at 900 km altitude with a 3 AM ascending node. Use your epicycles to tell me how to do it.
Phil
Thanks for your comments. Always appreciate them even if we may disagree. I am sorry you are often given a hard time here.
As for your link it went straight to a rather scary ‘Forbidden!’ message.
Do you have an alternative link? Thanks
tonyb
rgbatduke says:
August 6, 2014 at 8:42 am
What, then, is the net effect of a layer of air on the ground 1-100 m thick where the CO_2 concentration is routinely 500 to 1000 ppm, gradually tailing off into the background “well-mixed” concentration?
I have only some basic knowledge about the effects of radiation, collisions and re-radiation by CO2, but fortunately the US Army has done a lot of fundamental research on this topic by measuring the line-by-line absorbance of many CO2-water-CH4, etc. mixtures in air at different atmospheric pressures and integrating them over height. A simpler form of that program (Hitran) is available online for experimentation (Modtran):
http://climatemodels.uchicago.edu/modtran/
I have used it to check whether 1,000 ppmv in the first 1,000 meters over land would make much difference compared to a background of 400 ppmv up to the same height. It hardly makes a difference: at maximum 0.1°C extra.
As the increased levels are generally lower and extend to lesser heights, I suppose that the influence of the extra CO2 near the ground is negligible.
Water vapor may be a different story (but it can’t be independently changed in Modtran). Here on WUWT there have been various posts about the influence of irrigation on local temperatures, including one from Dr. Spencer, but sorry, I have no references.
climatereason says:
August 6, 2014 at 9:40 am
Phil
Thanks for your comments. Always appreciate them even if we may disagree. I am sorry you are often given a hard time here.
As for your link it went straight to a rather scary ‘Forbidden!’ message.
Do you have an alternative link? Thanks
No worries. I’ll see if I can find one, that one works fine for me.
Phil
No, I get ‘forbidden’ on all my three devices. Hope you can find an alternative link.
Tonyb
Ferdinand, MODTRAN fixes the temperature of the surface, so at best you are modeling the immediate change if a pulse of CO2 were emitted suddenly, not the climate response which occurs throughout the atmosphere, up to the level at which the atmosphere is no longer opaque at line center.
Bart says:
August 5, 2014 at 7:54 pm
Alan Robertson says:
August 5, 2014 at 6:51 pm
“The decades 1959-1979 do not show increasing T trend.”
—
“They show a trend, which begets curvature in overall CO2.”
You keep seeming to want CO2 to be affine to temperature. It isn’t. The rate of change is. That is the empirical result.
____________________
The T trend in the period 1959-1979 is decreasing, yet the curvature of the graph of CO2atm indicates an increasing slope of the line during that period, which indicates an increasing amount of CO2atm. If the rate of change of atmospheric CO2 tracks temperatures, or temperature trends, then wouldn’t the line slope be decreasing in this example? Here’s a closer look at that time period:
http://www.woodfortrees.org/plot/esrl-co2/normalise/mean:48/from:1959/to:1979/plot/hadcrut4gl/from:1959/to:1979/mean:48/plot/hadcrut4gl/from:1959/to:1979/trend
Tonyb says:
August 6, 2014 at 1:31 pm
Phil
No, I get ‘forbidden’ on all my three devices. Hope you can find an alternative link.
Tonyb
Try this Tony:
http://onlinelibrary.wiley.com/store/10.1111/j.2153-3490.1959.tb00023.x/asset/j.2153-3490.1959.tb00023.x.pdf?v=1&t=hyj7r4mc&s=ebc6df157c8c734ad9d92d3f263431a8ec98f1ff
Alan Robertson says:
August 6, 2014 at 2:21 pm
You are over-smoothing, and not using the best data set. The “curvature” you see is actually more like a step down, and then back up again. The fit isn’t bad. Moreover, there has always been a better fit with Southern hemispheric temperatures, which may suggest this is primarily an oceanic phenomenon.
The remaining small discrepancies really just go to show how strong the relationship is. Given a temperature relationship, it is almost certainly not between bulk averaged Southern hemisphere temperatures and CO2, but between some globally weighted sum total of temperature effects to CO2, reflecting the differing contributions of different regions to the outcome. We do not here have the data to estimate a proper weighted sum, just straight averages. Yet, even with those, we get striking correlations.
Contrariwise, if CO2 is, as claimed, a well mixed gas, and we are responsible for its change, then there should be a 1:1 correspondence between our bulk emissions and observed CO2. But, there isn’t. As even Ferdinand has effectively admitted, to keep alive the hypothesis that we are responsible, we have to assume that nature is currently taking out a greater share of our emissions than it did formerly. And, that is loading up the theory with epicycles in an attempt to rationalize the fact that the two quantities are really just not tracking one another.
Phil
Thanks for that.
I corresponded with Beck as I finished my own paper. He was rather dismissive of Bray. At that time he was working on producing a core of several hundred samples that met all the criteria that his critics had put forward. I think they were based on this paper;
—— ======
The last outstanding work, published by Dr. Francis Massen of Luxembourg with Georg Beck at “Climate 2009”, Nov 2-6, 2009, earned first place among all published and reviewed papers.
this is the paper
http://www.biokurs.de/treibhaus/CO2_versus_windspeed-review-1-FM.pdf
“A validation check has been made for 3 historical CO2 series. The overall impression is one of continental European historic regional CO2 background levels significantly higher than the commonly assumed global ice-core proxy levels.
The CO2 versus wind-speed plot seems to be a good first level validation tool for historical data. With the required caveats it could deliver a reasonable approximation of past regional and possibly past global CO2 background levels.”
—– ——
tonyb
climatereason says:
August 6, 2014 at 5:02 pm
IMO ambient CO2 might well have been in the 400s ppm in industrial 19th century Europe, but rained or snowed out before spreading globally. Now CO2 is produced more evenly around the globe, from many sources, including mobile, and deposited at higher altitudes.
Which makes me wonder what its concentration might be in the neo-Dickensian coal-fired cities of China today.
Bart says:
August 6, 2014 at 4:04 pm
“… if CO2 is, as claimed, a well mixed gas, and we are responsible for its change, then there should be a 1:1 correspondence between our bulk emissions and observed CO2. But, there isn’t. ”
_________________
We know that CO2 is only well mixed 500 meters or so above the surface, but this is only tangential to the discussion. Why should we assume a 1:1 correspondence between emissions and observed CO2? Wouldn’t many assumptions about poorly understood and chaotic systems have to be made for that to be true?
————–
“…to keep alive the hypothesis that we are responsible, we have to assume that nature is currently taking out a greater share of our emissions than it did formerly.
_____________
Isn’t there near-universal agreement that the biosphere increases growth with increases of CO2atm and temperature in extra-tropical/polar regions? Isn’t that phenomenon measurable and not just an assumption? With the increase in growth rate, isn’t the rate of C sequestration increasing in known and closely associated sinks such as topsoil, peat and woody plant mass?
These factors are difficult to quantify, along with every other factor in the CO2 balance, but are real, nonetheless.
Tropical biosphere expansion is limited by available nutrients and is essentially in CO2 equilibrium, but that growth/CO2 equilibrium point is slowly and marginally incrementing upwards with the increase in available CO2. Polar growth expansion is damped by temperature, but nevertheless has been demonstrated as increasing.
A couple of quick anecdotes about the way in which the warmists twist facts to heighten fear of changing climate: there was a study several years ago in which certain tundra plants were exhibiting greater growth and even retaining flowers later in the growing season. The climate fearosphere used that fact as a point of fear- “oh, but they shouldn’t be flowering so late- that will be really harmful to the plants when they freeze”. Similarly, studies showed increased growth in tropical rainforest plants and instead of seeing that as welcome news, the fearmongers replied- “oh, but the strangler fig vine is growing faster than old growth trees and will overcome them easier”.
While man’s enhancement of all life on the planet by increasing atmospheric CO2 was initially inadvertent, now that we know the positive effects of our actions, those who claim to represent the interests of all life on the planet are seemingly trying their best to shut down man’s contributions.
Drill baby, drill.
sturgishooper says:
August 6, 2014 at 5:09 pm
Which makes me wonder what its concentration might be in the neo-Dickensian coal-fired cities of China today.
_____________
Neo- Dickensian… I like that.
Alan Robertson says:
August 6, 2014 at 5:31 pm
“Wouldn’t many assumptions about poorly understood and chaotic systems have to be made for that to be true?”
On the contrary, I think it requires more speculative assumptions to assume it is not true.
I may have confused the issue a bit saying 1:1. What I mean by that is 1:1 modulo affine similarity. If emissions are driving atmospheric concentration, then it ought at least to accelerate when emissions accelerate, and decelerate when they decelerate. It ought to be conformal.
“Isn’t there near universal agreement that the biosphere increases growth with increase of CO2atm and temperature in extra- tropical/polar regions?”
Yes, but the standard first-order assumption would be that it would increase proportionately. Here, we have to assume it is increasing not in tandem with emissions, but by a greater factor than the emissions, that it is soaking up the emissions with accelerating vigor as time goes on. That is really reaching out on a limb, especially when it suggests a self-perpetuating dynamic which would, if left to its own devices, completely suck every last molecule of CO2 out of the atmosphere. Especially when there is an alternative explanation which requires no such speculation.
It is far less speculative to assume that there is nothing exotic going on, and that the obvious, empirical relationship with temperatures indicates precisely what it appears to be indicating: that the driving force in atmospheric concentration is not human industry, but a temperature dependent, natural process.
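[A minimal sketch of the bookkeeping at issue in this exchange, with illustrative made-up numbers rather than real emission or Mauna Loa data: under the mass-balance view, the share of emissions nature removes is whatever is left over after the observed atmospheric rise, so if emissions accelerate faster than the observed rise, the implied sink fraction must grow over time, which is exactly the assumption being disputed above.]

```python
# Illustrative (made-up) numbers only; not real emission or Mauna Loa data.
# Under the mass-balance view: implied natural uptake = emissions - observed rise.
emissions = [4.0, 5.0, 6.5, 8.0, 9.5]        # GtC/yr, accelerating
observed_rise = [2.2, 2.6, 3.0, 3.4, 3.8]    # GtC/yr retained in the atmosphere

for e, rise in zip(emissions, observed_rise):
    uptake = e - rise
    print(f"emissions {e:4.1f}  rise {rise:3.1f}  "
          f"implied natural uptake {uptake:4.1f} GtC ({uptake / e:5.1%} of emissions)")
```

With these toy numbers the implied uptake grows from under half to about 60 percent of emissions, which illustrates why the mass-balance interpretation requires an increasingly vigorous sink when emissions accelerate faster than the observed atmospheric increase.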