With apologies to The Treasure of the Sierra Madre, here’s a comment worth repeating from the Hits and Misses thread.
What I find interesting about the entire email corpus is the focus on the minutiae of the statistics and the different proxies. In none of the emails from the core team members do we see any physics of radiation. It seems that if it were your role to “prove” the positive feedback of CO2, you would want to actually do some physics of radiative and convective transfer of energy in the atmosphere. This is where the rubber meets the road.
It seems that the entire consensus group has taken an assumption (positive feedback of CO2 increase) and is going deeper and deeper into the details of the proxies in order to show what the results of that assumption are.
I think this is why, as a discipline, more and more physicists are dismissing AGW.

“Dave Springer says:
November 27, 2011 at 10:54 pm
I’m not sure how you’re getting an 8% increase. 1.00 to 1.04 is a 4% increase.”
The other 4% was from the added mass of water vapor. (Actually it would only be 4 × 18/29 ≈ 2.5% more, so I should have been more precise here.) In my example, the water vapor never changed phase, but stayed as water vapor, just a bit warmer water vapor after heat was applied. I am well aware of the latent heat of water vapor. I used to teach that stuff before I retired.
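A minimal sketch of that correction, assuming (as in the example above) that the 4% of added water vapor is a molar fraction and that dry air’s molar mass is roughly 29 g/mol:

```python
# Back-of-the-envelope check of the added-mass correction above.
M_AIR = 29.0  # approximate molar mass of dry air, g/mol
M_H2O = 18.0  # molar mass of water vapor, g/mol

vapor_fraction = 0.04  # 4% water vapor added, by moles (from the example)

# Extra mass relative to the original dry air:
mass_increase = vapor_fraction * M_H2O / M_AIR
print(f"Mass increase: {mass_increase:.1%}")  # -> about 2.5%
```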
As for the mixing of CO2, there are only minor variations in the CO2 concentration throughout the atmosphere.
It is a common misconception that buoyancy applies to gases the way it does to objects in liquids; for individual molecules it does not. If we take a helium-filled balloon in a room, it rises to the ceiling, while a CO2-filled balloon sinks to the floor. But if we poke a hole in each balloon, the individual molecules spread out very evenly. And in the atmosphere outside, the distribution of all gas molecules, with the exception of water vapor, is very consistent. If heavier molecules simply sank to the bottom, we would never find very heavy chlorofluorocarbons in the stratosphere. Helium atoms do escape from our atmosphere, but not due to buoyancy. You could have a CO2 molecule and a helium atom in the same place high in the atmosphere at the same temperature. Being at the same temperature means the average translational kinetic energy is the same. Kinetic energy is calculated by the formula E = ½mv². Since the helium atom is much lighter than the CO2 molecule, its speed at the same temperature is much higher, so atoms in the fast tail of the speed distribution can reach escape velocity and leave Earth. By contrast, Jupiter has a stronger gravitational field and a lower temperature than Earth, so it can hold on to its hydrogen and helium.
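For illustration, here is a minimal sketch of that comparison, assuming a representative upper-atmosphere temperature of 1000 K (my assumption; the real value varies widely with altitude and solar activity). Note that even helium’s average speed falls well short of escape velocity; it is the fast tail of the Maxwell-Boltzmann distribution, acting over geological time, that escapes:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg
V_ESCAPE = 11_186.0   # Earth's escape velocity, m/s

def v_rms(mass_amu: float, temp_k: float) -> float:
    """RMS speed from (3/2)kT = (1/2)m v^2."""
    return math.sqrt(3 * K_B * temp_k / (mass_amu * AMU))

T = 1000.0  # assumed upper-atmosphere temperature, K
for name, mass in [("He", 4.0), ("CO2", 44.0)]:
    v = v_rms(mass, T)
    print(f"{name:>3}: v_rms = {v:5.0f} m/s ({v / V_ESCAPE:.0%} of escape velocity)")
```

At the same temperature the lighter helium atom moves roughly three times faster than the CO2 molecule, which is the whole point of the kinetic-energy argument above.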
So if no one has handy links for you, you might be able to find the papers fairly easily using Google Scholar – or even searching here at WUWT, since there’s a very good chance that such studies wound up with posted articles (though I have to note, I seem to have much better luck using Google with “site:wattsupwiththat.com xyz” than just using the WUWT site search for “xyz”).
The problem with this approach is that there is a huge amount of research and papers from the 1940s through 1970s that do not exist on the Internet. I can tell you that I have not been able to find a single online reference to some of the books on IR radiation that sit on my desk (they are there thanks to my inordinate fondness for old science books and journals, and to my purchase of the Morton Thiokol technical library about 15 years ago).
I can tell you from the circa 1964 DARPA book that I found in a used book store that they emphatically state that CO2 concentrations vary widely at altitudes less than 100 meters, and that this observation was based upon a lot of experimental data gathered in the 1950s. What many people younger than 50-60 don’t understand is that the 1940s to 1960s were the heyday of the experimental physicist. Today most universities discourage even the awarding of an engineering physics degree. My undergrad in engineering physics had almost as many hours as a master’s degree, and still the physics and engineering schools argued over the title of my degree!
We are far too reliant today on computer models and modeling, and we need to bring experimentation back into this process and into climate change studies.
The experiments that I have discussed here are 100% doable today; hell, we did them 60 years ago! The USAF may not want to release some of this data due to national security (if you have to ask, you don’t need to know), but it is there, and some of the computer models are based upon downstream versions of this data.
@Robert Brown
I almost hate to post this considering the time, effort, and excellent information in your posts… but it seems to me that there is another, far simpler and more likely scenario than the two you posted about.
At least for the initial ClimateGate release, after having read through quite a few threads, it sure appeared to me that the file(s) had almost certainly been compiled from systematic keyword searches in order to prepare a possible response to FOI requests. Huge sets of keywords can wind up included in such a records search. In other words, the compilation would have been done by the IT personnel, with help as needed from lawyers and from those who would know what keywords to use, searching all possibly relevant servers and records for emails or files that might be applicable to the FOI. We all know those FOI requests were stonewalled and the information never released, but that doesn’t mean it wasn’t compiled as chronological entries of all the emails that could possibly have been applicable. I’ve seen this sort of thing done in a quite large corporate/government environment before. It also explains the patchy chronology (e.g., emails that didn’t contain any of the keywords would never make it into the almost automatically compiled files).
Then the leaker/hacker only has to know about, or manage to find, the large resulting file(s). Anyone involved in the process on the IT or ‘helping IT’ side would both know about them and probably know where/which server they wound up being stored on. Copy those off, and presto, they were good to go. This obviates any need to search many different locations, or to copy off massive amounts of emails and winnow them down to the relevant ones, and so on. It also means that they wouldn’t have had to know much about the science involved to separate the wheat from the chaff – that was already done in the process of preparing/evaluating the emails for the FOI.
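Purely as an illustration of the kind of compilation being described (the keywords and field names below are hypothetical, not anything known about UEA’s systems), the process amounts to something like:

```python
# Illustrative sketch of an FOI-style compilation: filter a mail archive
# by a keyword list, then store the hits in chronological order.
# All keywords and field names here are hypothetical.
KEYWORDS = {"foia", "proxy", "decline"}  # hypothetical search terms

def compile_foi_responsive(messages):
    """messages: iterable of dicts with 'date', 'subject', and 'body' keys."""
    hits = [
        m for m in messages
        if any(kw in (m["subject"] + " " + m["body"]).lower() for kw in KEYWORDS)
    ]
    # Emails containing none of the keywords never make it into the file,
    # which would explain the patchy chronology noted above.
    return sorted(hits, key=lambda m: m["date"])
```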
I also have to comment about the suggestion of taking polygraphs of staff members or any possible leakers – I don’t believe there is any way police could force anyone to take a polygraph, at least not in the USA. They can ask if someone is willing, but there is no way they can compel it, and the results aren’t admissible in court anyhow. Polygraphs are notoriously inaccurate, with, IIRC, estimates of accuracy even when using well-trained and experienced evaluators running anywhere between 50% and 80%. Those with less experience, or who just aren’t very good at it, can have even lower accuracy rates. The test providers have also been found to be quite subject to confirmation bias if anyone plants the idea in their minds that a certain person or persons are the likely culprit (of whatever is being looked for). The difficulty, of course, is that if you are asked to take one and back out because you don’t trust the accuracy, some people may take that as a sign of guilt, even when that’s totally incorrect and you are just knowledgeable and leery of the pitfalls and unreliability of polygraphs in general.
Anyhow, I fully admit that I may be wrong about the files being directly prepared for a possible FOI response – but that would be my bet; it seems to fit what is known and to make far more sense than the other options. I’d also think that the emails were leaked rather than hacked – even if I’m correct about how the data was compiled – it just seems more likely for an internal leak to have occurred than for someone to manage to hack in and find a few already compiled chronological sets put together for possible FOI responses…
I have no problem with the concept that noble gases likely wind up being well mixed, but I have a more difficult time believing that to be true of other gases – particularly one that has an active dynamic interaction with plants, animals, and bodies of water…
A few articles on uneven atmospheric CO2 concentrations or mixing:
http://wattsupwiththat.com/2011/10/02/global-warming-potentials/ (with graphic of global variation in concentrations)
http://wattsupwiththat.com/2009/12/16/nasa-says-airs-satellite-data-shows-positive-water-vapor-feedback/ – “Chahine said previous AIRS research data have led to some key findings about mid-tropospheric carbon dioxide. For example, the data have shown that, contrary to prior assumptions, carbon dioxide is not well mixed in the troposphere, but is rather ‘lumpy.’ Until now, models of carbon dioxide transport have assumed its distribution was uniform.”
http://wattsupwiththat.com/2008/07/29/co2-well-mixed-or-mixed-signals/ (with graphic of global variation in concentrations)
re post: Dennis Ray Wingo says: November 29, 2011 at 12:43 am
I couldn’t agree with you more. While I’m a bit young to have personally experienced the physics heyday you refer to, I know that there are reams of excellent research that aren’t available or referenced on the net, unfortunately. I also have a very high regard for much of the experimental research conducted during those decades – much of it was quite well thought out, rigorous, meticulous, and solidly documented, and thus resulted in meaningful data and conclusions. In other words, what science is actually supposed to be! I am becoming more and more disillusioned and disgusted with what seems to pass for experiments and research recently – especially with the degree to which results and conclusions seem to go far beyond what the data supports.
Dennis (or do you prefer Dennis Ray?), I wonder if something couldn’t be made of Gail Combs’ idea of approaching non-governmental sources for funding to conduct some of the types of experiments you are referring to? The funding wouldn’t even have to come from a single source… but I confess I have no clue how one would go about appropriately contacting parties such as the Koch brothers about possible research.
But I’d sure like to see it happen, particularly if comparison to the solid data from the past that you refer to could settle the AGW issue. I’m sure many others here would also love to see this happen!!!
Dennis Ray Wingo says:
November 29, 2011 at 12:43 am
So if no one has handy links for you, you might be able to find the papers fairly easily using Google Scholar – or even searching here at WUWT, since there’s a very good chance that such studies wound up with posted articles (though I have to note, I seem to have much better luck using Google with “site:wattsupwiththat.com xyz” than just using the WUWT site search for “xyz”).
The problem with this approach is that there is a huge amount of research and papers from the 1940s through 1970s that do not exist on the Internet. I can tell you that I have not been able to find a single online reference to some of the books on IR radiation that sit on my desk (they are there thanks to my inordinate fondness for old science books and journals, and to my purchase of the Morton Thiokol technical library about 15 years ago).
I can tell you from the circa 1964 DARPA book that I found in a used book store that they emphatically state that CO2 concentrations vary widely at altitudes less than 100 meters, and that this observation was based upon a lot of experimental data gathered in the 1950s. What many people younger than 50-60 don’t understand is that the 1940s to 1960s were the heyday of the experimental physicist. Today most universities discourage even the awarding of an engineering physics degree. My undergrad in engineering physics had almost as many hours as a master’s degree, and still the physics and engineering schools argued over the title of my degree!
We are far too reliant today on computer models and modeling, and we need to bring experimentation back into this process and into climate change studies.
The experiments that I have discussed here are 100% doable today; hell, we did them 60 years ago! The USAF may not want to release some of this data due to national security (if you have to ask, you don’t need to know), but it is there, and some of the computer models are based upon downstream versions of this data.
===================
The arguments about this are encapsulated in AGWScienceFiction v Beck. AGWSF denies that Beck is relevant because they abide by the claim that ‘CO2 is well-mixed’, which, as I’ve tried to explain, is promoted by corrupted basics, and by claiming that the ‘background’ measurements of CO2 from Mauna Loa show parity with measurements from other stations.
Firstly, they don’t understand Beck because they don’t understand that CO2, being heavier than air, will not readily rise into the atmosphere, so CO2 production and levels are naturally local, not this mythical ‘well-mixed background’. CO2 also readily combines with water to form carbonic acid; all pure rain is carbonic acid. Again, even if it’s windy and CO2 is on the move from the area it was produced in, rain brings it back down to Earth. It still won’t travel far from its source, as water vapour rises and condenses out into rain, or it’s captured in fog, dew, or general humidity (cold doesn’t mean the atmosphere isn’t humid). Stuff rusts because of this carbonic acid, which reminds me I have to paint the garden furniture…
Secondly, the Mauna Loa data is simply not reasonable. It was begun by Keeling, who claimed, after less than two years of data gathering, that he had found this mythical background rising from human input. He began with an agenda, and his son ran the other stations which now so readily agree with Mauna Loa; the operation was then taken over by big government when the AGW bandwagon was rolling.
Mauna Loa is described in the spiels about this as a “pristine” site for measuring this unproven ‘background’, that is, uncontaminated by local production of CO2. Mauna Loa is the world’s biggest active volcano; it sits surrounded by active volcanoes in an area of extreme hot-spot activity building islands, with thousands of earthquakes above ground and under water every year; the seas are warm, etc. It is sheer nonsense to claim that it is possible to separate out this mythical ‘background’ from the immense production of carbon dioxide in the atmosphere around the station.
They don’t try. What they do (you can find descriptions of their methods) is choose a cut-off point, above which the readings are labelled ‘volcanic’ and below which they are labelled ‘background’. The Keeling curve goes ever upwards regardless of global temperature changes – CO2 levels have continued to rise even as global temps have fallen over the last couple of decades, and so on. Keeling cherry-picked his starting point by ignoring all the previous work, which shows great and local variation, and that work goes back into the century before last. He never proved that there was such a thing as a ‘background’.
By its nature, carbon dioxide is going to be a homebody. This is what the AIRS team concluded: that it was local and lumpy, and that they would have to look at wind systems a bit more…
I don’t know how we can get the missing AIRS data.
Mauna Loa is described in the spiels about this as a “pristine” site for measuring this unproven ‘background’,
I really don’t know about the quality of the Mauna Loa data, and coming to a definitive answer would require spending quite a lot of time understanding the exact experimental setup. I do know, having just been on the big island a couple of weeks ago, that there is frequently a volcanic fog or “vog” that occasionally kills entire crops and animals. However, if it is CO2, and it is heavier than air, then in large concentrations it will pool in low spots.
This has happened in Africa, where it killed a lot of people in the 1980s, and it is periodically happening at Mammoth Mountain in California (http://pubs.usgs.gov/fs/fs172-96/). CO2 molecules are heavier than air, and it defies reason that there would not be locally elevated concentrations of the gas, as the wet chemists’ experiments from as far back as the 1840s have shown. I think that this is part of the reason the chemists have been dismissed by the climatologists: the experimental results have tended to vary quite a bit. The DARPA book was done by experimentalists long before the AGW controversy and so would tend to put things in their proper context.
As far as obtaining funding for something like an experimental campaign…
It is possible, but frankly, at the end of the day it would take a lot of full-time effort to do so. I have done my volunteer gig several times, but I would be willing to advise a good experimentalist, or a funding person, should they want to fund it.
FOIA (God bless his or her soul) has done every man, woman, and child on Earth an enormous service by releasing these emails. At the end of the day, science has been compromised in the service of a political agenda, and FOIA has exposed this.
I am not one who thinks that we should go forward with business as usual, but for different reasons. In the book I wrote on space development, I did a historical study on energy. Before coal, the average human lifespan was 35 years. At the peak of the coal age in 1900, human lifespans increased to 48 years. Today, at the peak of the oil age, lifespans are about 80 years. We must go beyond oil if our 9 billion brothers and sisters are to live in a prosperous world. Nuclear fusion is the end state, and our goal should be to make a megawatt of electrical power as cheap as a kilowatt is today. Energy is freedom, and it should be our goal to make this happen. This is why I have little interest in the low road that the watermelons want us to tread. It is an intrinsic fact that all forms of alternative energy pushed by that crowd are low-density, energy-poverty forms that cannot be sustained over the long term. This is where the real fight is, and the untold story of the climategate emails is that the “team” are unwitting pawns in this much larger fight for our future.
“Rational Debate says:
November 29, 2011 at 1:47 am
I have no problem with the concept that noble gases likely wind up being well mixed, but I have a more difficult time believing that to be true of other gases – particularly one that has an active dynamic interaction with plants, animals, and bodies of water…
A few articles on uneven atmospheric CO2 concentrations or mixing:
http://wattsupwiththat.com/2011/10/02/global-warming-potentials/ (with graphic of global variation in concentrations)”
I was aware of this graph when I wrote
“Werner Brozek says:
November 28, 2011 at 10:39 pm
As for the mixing of CO2, there are only minor variations in the CO2 concentration throughout the atmosphere.”
Note that the variation is between 376 and 386 ppm on that picture, so that is only a difference of about 2.7%. Naturally, the CO2 would be higher for a few hours at the point where it is produced and lower where it is used up. However, macroscopic systems tend to go towards an equilibrium in this regard. The question is just how long it takes to reach equilibrium, and we cannot ignore the fact that true equilibrium is never reached in the atmosphere. On the other hand, how much difference would it make to global temperatures if the concentration were 376 instead of 386? In my opinion, the slight lumpiness of the CO2 concentration does not affect the overall debate.
Dennis Ray Wingo says:
November 29, 2011 at 9:09 am
Mauna Loa is described in the spiels about this as a “pristine” site for measuring this unproven ‘background’,
I really don’t know about the quality of the Mauna Loa data>>>
I looked into the issue a long time ago and came away pretty satisfied that their data is accurate. Willis Eschenbach did a very deep dive on the matter and came away with the same conclusion, as has Ferdinand Engelbeen. If Willis and Ferdinand are OK with it, I’m OK with it.
Beck and the pre-Mauna Loa data are more difficult to sort out. Beck showed, for example, that readings taken at night when there was no wind were much lower than readings taken on windy nights. On a windless night in the centre of a corn field, the level of CO2 would be effectively zero because the corn had sucked it all up. To get accurate readings, Beck worked out various techniques for sampling in areas where vegetation could not have a large effect, with sufficient wind to ensure the local air was mixed by the turbulence, and so on.
@Werner Brozek
Yes, and the 3rd link I provided shows a NASA AIRS globe with the CO2 range from about 364/5 to 382. Each of these is only a two-dimensional representation – I would have to believe it quite likely that once atmospheric depth is considered, the range is often far larger. Perhaps even vastly larger if you are comparing, say, the CO2 levels above actively growing plant life such as a cornfield on a sunny summer afternoon versus in the middle of the night… or above a massive herd of millions of ruminants gathered for migration, versus CO2 levels above that large cornfield on the summer day, or even above barren desert sand dunes.
How well and quickly do those variations disperse and mix into the atmosphere? I’m sure it depends on a huge number of different factors. As you note, however, it almost certainly can’t ever reach equilibrium – not considering the uneven dispersion of life and water on the planet, varied use of CO2 during night vs. day by all plant life, jet streams, varied vertical mixing rates, etc., etc.
Of course, our typical discourse is rather vague and ambiguous – what is ‘well mixed’ to one may be ‘not mixed’ to another, unless we begin actually specifying numbers and justifying why a particular value has meaning for a particular context. Scientifically, however, a range of 364-382 (or even 376-386), plus some existing published research stating that typical atmospheric CO2 has a “clumpy” or “lumpy” distribution that is described as “not well mixed”… well, that simply doesn’t meet the definition of “well mixed” that I was taught in chemistry/biochemistry/physics/nuclear & radiological safety engineering etc., or that I have used in my career. And as an aside, you note that the smaller range is only 2.7% of the total value – but how meaningful is that when the starting value was about 280 ppm, not 0 – and when what we are discussing is a trace gas that supposedly has a vast ability to affect our climate based on relatively small changes?
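To make that baseline point concrete, here is a minimal sketch of both framings of the same 10 ppm spread, using the numbers quoted above:

```python
low, high = 376.0, 386.0  # ppm range quoted above
preindustrial = 280.0     # ppm baseline mentioned above

spread = high - low
print(f"Spread vs. total concentration: {spread / low:.1%}")                      # ~2.7%
print(f"Spread vs. rise above 280 ppm:  {spread / (high - preindustrial):.1%}")   # ~9.4%
```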
Is the difference, the range of CO2 atmospheric variance, significant enough to be meaningful in climatology? Personally, I don’t know – but since we’re talking about incident radiation and subsequent emissions, I would have to suspect that it very well could be.
Regardless, it certainly seems to be yet another very basic issue intimately associated with AGW that isn’t at all well characterized or represented in models and in many climatology papers. They seem to run on the apparently unfounded assumption of ‘well mixed.’ To me, that’s a serious flaw in the application of the scientific method, and it therefore calls into question everything that is based on it, along with any papers that build on those conclusions. Again, would fixing it change the conclusions? I don’t know – but you certainly don’t get to good, scientifically based answers by using unfounded assumptions and incorrect representations – and you certainly don’t build solid scientific foundations to springboard future discoveries, either.
Sure, the research might be correct accidentally or in spite of the assumptions and inaccuracies, but then, that’s not really science, now is it?
Dennis Ray Wingo and DavidMHoffer are absolutely correct on this one. Electrical production costs are typically the lowest, or very close to it, using nuclear power – and that includes fuel costs, storage of spent fuel, decommissioning, etc. For many, many years France produced electricity cheaper than pretty much anywhere else in the EU because it uses nuclear power (and perhaps still does; I just haven’t checked the data recently).
In the USA, the biggest factors that increased the costs and delayed construction of nuclear power plants were related to regulatory issues and massive legal and physical interference by anti-nuclear activists. Utilities weren’t allowed to put planned nuclear plants into the rate base before and during construction, as they could with other types of power plants, resulting in large up-front capital financing costs that other plants didn’t face. Licensing requirements were often revised multiple times during construction, requiring long delays and expensive retrofitting, often for little or no significant safety improvement. For that matter, while stringent regulations are necessary to ensure safety, in some respects the regulatory burden is far out of proportion and beyond anything necessary, adding needless expense.
Activists not only physically impeded plant work, but also filed multiple frivolous lawsuits, which halted construction and were, of course, very expensive and time consuming. For example, claims were made of faulty welding on already buried components by people who could have had no way of knowing anything about the welds in question. These were often components already under concrete and other structures, such that the utility would have to rip everything up, document the sound welds, and then rebuild everything that had been torn up to get at the components. As a result, utilities began documenting each and every weld, so that they had proof if a lawsuit was filed or a claim of this nature was made (plenty of added expense in these sorts of CYA efforts).
These sorts of things are what killed any continuing development of nuclear in the USA – the activist/legal forced delays and regulatory uncertainty, not the cost of electricity produced by these plants, which remains extremely cheap.
“Rational Debate says:
November 29, 2011 at 2:54 pm
Is the difference, the range of CO2 atmospheric variance, significant enough to be meaningful in climatology?”
A recent climate sensitivity estimate is 1.7 to 2.6 degrees for a doubling of CO2. This may well be wrong; however, let us assume for the moment that it is correct. That would be similar to knowing that the acceleration due to gravity at Earth’s surface is somewhere between 8 and 12 m/s². (At the equator, the g value is 9.78 m/s², and at the poles it is 9.83 m/s².) But if we did not know these values, and only knew g was between 8 and 12, then being concerned about what happens at the poles versus the equator would be totally irrelevant. That is how I see the issue of the very slight lumpiness in CO2 concentrations. If the best guess is 1.7 to 2.6, then the lumpiness issue is also totally irrelevant at this stage.
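A minimal sketch of the analogy’s point, comparing the relative width of the quoted sensitivity range with the pole-to-equator variation in g:

```python
# Relative widths in the analogy above.
sens_low, sens_high = 1.7, 2.6  # degrees per CO2 doubling (estimate quoted above)
g_equator, g_pole = 9.78, 9.83  # m/s^2

sens_mid = (sens_low + sens_high) / 2
print(f"Sensitivity range: +/-{(sens_high - sens_mid) / sens_mid:.0%}")  # about +/-21%
print(f"g variation:       {(g_pole - g_equator) / g_equator:.2%}")      # about 0.51%
```

The uncertainty in the sensitivity estimate is roughly forty times wider, in relative terms, than the pole-to-equator spread in g, which is the point of the comparison.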
@Werner Brozek
OK, now I’m thoroughly convinced that CO2 is anything but well mixed in our atmosphere. Watch the video at: http://wattsupwiththat.com/2011/02/16/the-life-and-times-of-carbon-dioxide/
Or, ironic timing, see the Japanese graphs: http://joannenova.com.au/2011/11/co2-emitted-by-the-poor-nations-and-absorbed-by-the-rich-oh-the-irony-and-this-truth-must-not-be-spoken/
Werner Brozek;
A recent climate sensitivity estimate is 1.7 to 2.6 degrees for a doubling of CO2. This may well be wrong; however, let us assume for the moment that it is correct>>>
Let’s assume that it is correct. Then let us also demonstrate that, even if correct, it is an exaggeration. I haven’t read that specific paper, but all the sensitivity estimates in IPCC AR3 and AR4 are calculated against the “effective black body temperature” of the earth, which is about -20 C. Given that the Stefan-Boltzmann equation defines temperature for a given radiative balance as:
P (W/m²) = 5.67×10⁻⁸ × T⁴, with T in kelvins…
…and the average temperature of the earth at the SURFACE is about +15 C, the sensitivity estimate applied to the earth’s surface winds up being about 2/3 of the sensitivity at the “effective black body temperature.” That gives us (at the earth’s surface) an estimate of 1.13 to 1.80 degrees.
Applying that same formula to the tropics, the sensitivity is perhaps a quarter of that, or less. In the arctic regions, it may be double or triple. If the Antarctic goes from an average of -80 degrees to -70, I’m pretty certain that the total lack of life in the region will not change appreciably.
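For anyone who wants to check the scaling step, here is a minimal sketch, assuming 253 K (about -20 C) for the effective black-body level and 288 K (about +15 C) for the surface; the exact outputs depend on the temperatures assumed, but they land close to the 1.13-1.80 range above:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def sensitivity(temp_k: float) -> float:
    """dT/dP from P = sigma * T^4, i.e. 1 / (4 * sigma * T^3)."""
    return 1.0 / (4 * SIGMA * temp_k**3)

T_EFF, T_SURF = 253.0, 288.0  # effective vs. surface temperature, K

scale = sensitivity(T_SURF) / sensitivity(T_EFF)  # equals (T_EFF / T_SURF)**3
print(f"Surface/effective sensitivity ratio: {scale:.2f}")  # ~0.68, i.e. about 2/3

for s in (1.7, 2.6):  # quoted range, degrees per CO2 doubling
    print(f"{s} deg at the effective level -> {s * scale:.2f} deg at the surface")
```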
“Rational Debate says:
November 29, 2011 at 7:47 pm”
On the first graph, the variations were from 365 to 385 and on the other, there were very few spots under 365.
Check out the Keeling curve at http://en.wikipedia.org/wiki/Keeling_Curve
Note that it was about 1998 when the CO2 level was 365 ppm, and what has happened temperature-wise since then? Nothing, even though CO2 is about 390 ppm now. Let us assume for a moment that these numbers are uniform across the globe. If a change from 365 to 390 made no difference, why should lumpiness between 365 and 390 make any difference? CO2 just does not seem to have much effect!
“davidmhoffer says:
November 29, 2011 at 8:31 pm
I haven’t read that specific paper”
See
http://wattsupwiththat.com/2011/11/09/climate-sensitivity-lowering-the-ipcc-fat-tail/
However I do not feel qualified to comment on what you have said.
re post by: Werner Brozek says: November 29, 2011 at 11:16 pm
Werner, I’m totally with you on your final conclusion. While I’ll debate various aspects and points, I’ve argued for some time now that “climate science” has never managed to get past the null hypothesis – there’s just nothing that appears unusual in the rate, magnitude, or length of the warming since… well, pick your starting point for when mankind started contributing ‘significant’ amounts of CO2 – say, 1950.
The point I’m trying to make in this discussion, however, is that claims by ‘climate scientists’ of CO2 being “well mixed,” and models using this assumption, basically invalidate the science – or what they claim passes for science. Sorry for the tone here; I’m just beyond utterly disgusted with the behavior of the top ‘climate scientists’ as displayed in the climategate emails. Not to mention the massive and egregious violations of the scientific method – such as ignoring the null hypothesis, or the degree of uncertainty, or any number of other aspects that we’ve all taken issue with over the last few years. I rather cherish science in general, and what it has helped us accomplish and brought into our lives – and I utterly hate seeing it mangled so, and the general public so grossly misled and misinformed, with the subsequent worldwide waste of resources and damage to so many people’s lives.
davidmhoffer says:
November 27, 2011 at 2:46 pm
The source of the material is obvious if you think about two factors: timespan (more than 10 years) and contents (both 1.0 and 2.0 tightly focused on climate, and including many embarrassing and possibly illegal admissions). The source started with an email backup server, culled for an FOIA request or requests that were not honored.
It took both a medium level of technical expertise and a knowledgeable editor to cull. When this wasn’t released, someone internal (an IT expert, a content editor, or a mystery person at UEA) was quite upset at the non-release and determined that the information get out there. Either a copy of the material was made, or (my theory) the directory on which the FOIA info was stored was given internet sftp or rsync capability, possibly at the ‘guest’ level. Then the knowledge that the info was available was deliberately leaked. Alternatively, someone burned CDs or USB thumb drive copies of the material, or used a USB hard drive enclosure, etc. There are many ways for the information to leave the premises, physically or electronically. Once a second copy of the FOIA file existed, third and subsequent copies were easy, with no further audit trail.
But it had to be someone who knew the FOIA culled information existed and where it was kept. Thus – inside job.
The physics points to a lukewarm sensitivity, with a hard cap at 4K, but a likely range of 0K to 0.5K per CO2 doubling. I’d have to take a close look at gray body radiation, CO2 and H2O absorption vs. transparent bandwidths/frequencies, and evaporation rates and convection work done, all accurate within 2%. My WAG (wild guess) back-of-the-envelope estimate is 20% of 1.2K, or 0.24K per CO2 doubling, with the caveat that this is likely to be CO2/H2O ratio sensitive as well as temperature sensitive (slightly less than 0.24K at higher temperatures).
The primary difference is visible light absorption in the ocean, which should have lags of around 2-3 years and 850-1100 years, vs. IR absorption in the ocean, with a lag of less than a second for SST at micron depth, and a lag of roughly 0-14 days in the atmosphere. These must be given separate partial differentials, and both sets of equations are hairy. Most of this involves both work and temperature.