The scoop behind the #ExxonKnew show trial

Right now, there’s a trial going on in New York City between the #ExxonKnew climate activist crowd and Exxon-Mobil. According to Bloomberg, it isn’t going well for the activists and has been reduced to a numbers game of attribution.

I recorded a podcast yesterday for Heartland with Chris Horner, who has been digging into records obtained by FOIA, which reveal a lot of unsavory shenanigans on the part of State Attorney General offices.

From Heartland:

Christopher Horner, an attorney who founded Government Accountability and Oversight and Climate Litigation Watch, joins Senior Fellow Anthony Watts for today’s podcast. He has been following the litigation that Bill McKibben and various state Attorneys General have been pursuing to pin global warming on the company.

The various common players in the global warming scare have long been pushing for an investigation into fossil fuel companies, Exxon especially. He lays out the PR strategy behind worldwide student climate movements, such as those led by Greta Thunberg, and the power players behind them, to show the movement is not as organic as the media spins it. These lawsuits are brought not to change the law, but to settle for huge payouts, because the plaintiffs know Exxon and other oil and gas companies are near-if-not-entirely mandatory for modern life.


51 thoughts on “The scoop behind the #ExxonKnew show trial”

  1. “…trying to pin the global-warming crisis on Exxon”

    Anthony, Anthony, but there is no global warming crisis.

    Only teasing, of course. Everyone makes slips of the tongue.

    • “I have been told”
      that two thirds of our world is covered by water!!!
      It is possible to fly from London (UK) to Sydney (Aus.) and not see land!!!
      When “SCIENCE” can identify the life found 5 km+ below our oceans, THEN!! and only then, would I begin to think about climate change.


      It appears that the Justices of the Supreme Court of the USA are no more competent than Canada’s federally-appointed Justices, and that is not a compliment. Do any of these Supreme Court Justices have a scientific background? It appears that few if any of them do; if they did, these frivolous and vexatious cases would have been tossed out.

      In fairness, the USA has an open and transparent review process for judicial appointments, whereas in Canada that process is neither open nor transparent, and federal judicial appointments seem to be made in return for services to the ruling political party, with little requirement for legal competence.

      Regarding these lawsuits, what do the plaintiffs mean by “climate change”?

      The “catastrophic human-made global warming” (CAGW) hypothesis has been falsified many ways over the decades, as further described below.

      The global warming alarmists (aka “warmists”) strategically moved their false rhetoric from “catastrophic human-made global warming” to “climate change”/”wilder weather”, which is a non-falsifiable hypothesis. “Climate change” can mean warmer/colder, wetter/drier, stormier/calmer, up/down and sideways. The warmists have deceived the gullible public, but competent scientists know that a non-falsifiable hypothesis is non-scientific nonsense – utter drivel.

      “A theory that is not refutable by any conceivable event is non-scientific.” – Karl Popper

      The best objective measure of scientific competence is one’s predictive track record. It is accurate to state that “every scary global warming/climate change prediction by the IPCC and its CAGW acolytes has proved false-to-date” – the warmists have a perfectly negative predictive track record and thus perfectly negative credibility. Nobody should believe them.

      Our predictive track record is excellent. In 2002 my co-authors Dr Sallie Baliunas, Astrophysicist, Harvard-Smithsonian, Dr Tim Patterson, Paleoclimatologist, Carleton U and I wrote:

      “Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
      The CAGW hypothesis has been falsified many ways over the decades – some of those falsifications are described here:

      “The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
      Grid-connected intermittent wind and solar power have been a costly, unreliable fiasco, driving up costs, reducing grid reliability and increasing winter mortality. These “green” power systems do not even significantly reduce CO2 emissions, because of the need for ~100% conventional spinning reserve to step in when the wind doesn’t blow or the sun does not shine.

      Repeating, “climate change” is a non-falsifiable hypothesis, so by definition it is non-scientific nonsense. Catastrophic human-made global warming IS a falsifiable hypothesis, and it has been adequately falsified many ways over past decades. These cases should never have been allowed to proceed.

      Regards, Allan

  2. Anthony,
    I’m confused, as no one on our Earth is capable of defining Climate, nor its related research.

    The climate system is complex and in a constant state of change.

    What moron chooses to sue over an aspect of a system no one fully understands?

      • Any evidence anyone gives a McKibben theory serious thought?

        The idea of a lawsuit based on his or any other climate change nonsense is absurd!

      • Just realized, there is a reasonable basis for an International Class Action Suit against the UNFCCC officials.

        McKibben is one of their pawns…

        • With the type of “lawfare” that these activists are engaged in, it sure seems like worldwide class action suits, as mentioned, are a completely appropriate tactic to counter the activists.
          I’m not aware of anything like that ever happening before… Has it??

  3. One thing I am always curious about is how, year after year, it is always the hottest year ever by a hundredth of a degree or two. How many temperature stations worldwide measure accurately to a hundredth of a degree?

    • Robert,
      None. But they pretend that they can obtain high accuracy by averaging together a bunch of lower accuracy measurements. As far as I can tell, the very best they could claim today is +/- 0.5C accuracy, but before the use of the automated stations, it’s probably around a full degree (think about people squinting at max/min mercury thermometers).

      • I was one of those Squinters in my senior year at Pomona College in 1959-1960. The little observatory there kept official records (max/min temperatures and rainfall) for the Weather Service, and making these measurements was among the duties for the 2 residents living rent-free at the observatory.

      • Paul Penrose

        A point I have been making for years. Then there’s the difference in height of the observer of said thermometer, the available light, the condition of the Stevenson screen, what it’s painted with and, of course, its location and the deterioration of same relative to UHI.

        Then there’s who actually took those measurements; was it the tea boy sent out in the snow/blazing sun/typhoon to check and record the data who thought f**k that, I’m having a ciggy instead – it feels a little warmer than yesterday so I’ll add 0.5 of a degree.

        Then there’s which particular model of Stevenson screen it was, the UK version or the US version, or the much newer Australian one that is tiny by comparison and was demonstrated by Jennifer Marohasy to substantially affect temperatures which, of course, will have to be ‘adjusted’ in order that they conform to generations of other data (or perhaps the other way around).

        Then there’s questions over how recording stations were calibrated in the 19th Century before even telegraphs were available in many locations around the world.

        Nor do I think chucking canvas buckets over the side by an illiterate deckhand, to no defined depth, to gather SST’s was particularly accurate. Not that modern methods of shipboard SST’s are much better as there is evidently no international standard on size or placement of water intakes for the purposes.

        Oh!….I almost forgot. Most shipboard SST’s would have been taken along well-plied trade routes. Not too many ships in the 19th or even early 20th Century would have ventured into the Southern Ocean, as there was little point in doing so other than, perhaps, for whaling.

        I could go on, but I’m a layman so will probably get shot down here.

        • Prior to the 1860s, wind/sail was the prime propulsion for ships and wasn’t totally eclipsed until after WW1. These vessels didn’t have a need to know water temperature. They would follow wind patterns rather than the great circle routes used by steam ships. So any data is of academic interest rather than being of any use. Like all temperature data, we have only a few decades of semi-useful data.

        • HotScot,
          As one who spent several years on Scottish registered “weather reporting ships”, you are mostly right, except that the bucket would take surface water and the deckhand would be supervised by an officer.
          So no shooting down.

          • Oldseadog

            Presumably that would be in the 1960’s/70’s perhaps?

            Around when the world was getting an idea of what was going on, or not.

            I’m principally talking about SST’s and weather data collection well before that. It wasn’t until around WW2 that the US Navy started to take SST’s seriously.

            Mind you, SST’s taken from the top foot or more of sea water are affected by the sun, I understand, so are unrepresentative.

      • A couple of threads have touched on this lately but it hasn’t been dealt with comprehensively.

        Accuracy, precision, and uncertainty have all been pretty much ignored in the race to find temperature data that will allow GCMs to predict warming, even out to one hundredth of a degree. The use of significant figures has been ignored in the race to obtain false precision. This is not only unscientific but unethical, to say the least.

        The very statistics of the Law of Large Numbers and the Central Limit Theorem have been misapplied, because it is assumed that temperature readings from diverse locations are part of the same population, when in reality they are not. Neither are they multiple readings of the same thing by the same device, which are what is needed to statistically calculate a “true value”.

        Uncertainty budgets as specified in numerous official documents describing the correct way to handle measurements are absolutely ignored. Calibration accuracy, recording precision, systemic errors, and random errors are never analyzed or reported. Temperature recordings are basically treated as counted numbers and accurate to the nth degree.

        When you see these kinds of numbers, you can be sure the people creating them are mathematicians and computer programmers who look at data as absolutely accurate counted numbers.

        I’ll bet most of them continue to use the traditional rounding methods which bias data to the high side rather than the method of using even/odd preceding number to determine when to round when the rounding digit is a 5.
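
        The rounding-bias point above is easy to demonstrate. Here is a small sketch of my own (an illustration, not anything from the thread or the measurement literature) comparing traditional round-half-up against round-half-to-even on readings that land exactly on a half:

```python
import math

# Illustration only: compare the average error of traditional
# "round half up" with "round half to even" (banker's rounding,
# which Python's built-in round() uses) on readings ending in .5.

def round_half_up(x):
    # Traditional rounding: a trailing 5 always rounds upward.
    return math.floor(x + 0.5)

# Readings that land exactly on a half: 0.5, 1.5, ..., 9.5
readings = [n + 0.5 for n in range(10)]

bias_half_up = sum(round_half_up(r) - r for r in readings) / len(readings)
bias_half_even = sum(round(r) - r for r in readings) / len(readings)

print(bias_half_up)    # 0.5  -> every half rounds up, biasing the data high
print(bias_half_even)  # 0.0  -> halves alternate up/down, no net bias
```

        Real readings rarely land exactly on a half, so the bias in practice is smaller than this worst case, but its direction is always upward for round-half-up.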

        • Jim Gorman,
          You seem to have studied the problem. Can you please volunteer to tell me what you understand to be the implications in this email from the Australian Bureau of Meteorology, BOM.
          11 April 2019 Dear Mr Sherrington,
          Thank you for your correspondence dated 1 April 2019 and apologies for delays in responding.
          Dr Rea has asked me to respond to your query on his behalf, as he is away from the office at this time.
          The answer to your question regarding uncertainty is not trivial. As such, our response needs to consider the context of “values of X dissected into components like adjustment uncertainty, representative error, or values used in area-averaged mapping” to address your question.
          Measurement uncertainty is the outcome of the application of a measurement model to a specific problem or process. The mathematical model then defines the expected range within which the measured quantity is expected to fall, at a defined level of confidence. The value derived from this process is dependent on the information being sought from the measurement data. The Bureau is drafting a report that describes the models for temperature measurement, the scope of application and the contributing sources and magnitudes to the estimates of uncertainty. This report will be available in due course.
          While the report is in development, the most relevant figure we can supply to meet your request for a “T +/- X degrees C” is our specified inspection threshold. This is not an estimate of the uncertainty of the “full uncertainty numbers for historic temperature measurements for all stations in the ACORN_SAT group”. The inspection threshold is the value used during verification of sensor performance in the field to determine if there is an issue with the measurement chain, be it the sensor or the measurement electronics. The inspection involves comparison of the fielded sensor against a transfer standard, in the screen and in thermal contact with the fielded sensor. If the difference in the temperature measured by the two instruments is greater than +/- 0.3°C, then the sensor is replaced. The test is conducted both as an “on arrival” and “on departure/replacement” test.
          In 2016, an analysis of these records was presented at the WMO TECO16 meeting in Madrid. This presentation demonstrated that for comparisons from 1990 to 2013 at all sites, the bias was 0.02 +/- 0.01°C and that 5.6% of the before tests and 3.7% of the after tests registered inspection differences greater than +/- 0.3°C. The same analysis on only the ACORN-SAT sites demonstrated that only 2.1% of the inspection differences were greater than +/- 0.3°C. The results provide confidence that the temperatures measured at ACORN-SAT sites in the field are conservatively within +/- 0.3°C. However, it needs to be stressed that this value is not the uncertainty of the ACORN-SAT network’s temperature measurements in the field.
          Pending further analysis, it is expected that the uncertainty of a single observation at a single location will be less than the inspection threshold provided in this letter. It is important to note that the inspection threshold and the pending (single instrument, single measurement) field uncertainty are not the same as the uncertainty for temperature products created from network averages of measurements spread out over a wide area and covering a long-time series. Such statistical measurement products fall under the science of homogenisation.
          Regarding historical temperature measurements, you might be aware that in 1992 the International Organization for Standardization (ISO) released their Guide to the Expression of Uncertainty in Measurement (GUM). This document provided a rigorous, uniform and internationally consistent approach to the assessment of uncertainty in any measurement. After its release, the Bureau adopted the approach recommended in the GUM for calibration uncertainty of its surface measurements. Alignment of uncertainty estimates before the 1990s with the GUM requires the evaluation of primary source material. It will, therefore, take time to provide you with compatible “T +/- X degrees C” for older records.
          Finally, as mentioned in Dr Rea’s earlier correspondence to you, dated 28 November 2018, we are continuing to prepare a number of publications relevant to this topic, all of which will be released in due course.

          • My first thought when I read this was what were they using as the “transfer standard”. What is its accuracy when transported and operated in the field? The uncertainty in the standard could easily be larger than that of the sensors.

            One can’t tell from the email if they expect the “transfer standard” to be absolutely accurate or if the +/- 0.3 C is the uncertainty of the standard. If this is the uncertainty of the standard, then the process allows the sensor to add an additional +/- 0.3 C uncertainty. In other words, the standard could be at +0.3 and then the measurement could allow +0.3 on top of that, for a total of +/- 0.6 C uncertainty. If the uncertainty of the “transfer standard” is assumed to be “0” then they are only kidding themselves.

            The statement of 0.02 +/- 0.01 C makes me cringe. To get these figures directly from measurements would require an instrument capable of accurately measuring temperatures out to one thousandth of a degree – in the field! If this is true, I really, really want to know what their “transfer standard” was and how they preserved this accuracy. If, as the email states, they measured to the nearest one tenth of degree, then they have added at least one digit of precision, probably through averaging. This violates everything scientists learn about significant digits when measuring.

            My guess is that the 0.02 +/- 0.01 is an “uncertainty of the mean” obtained from averaging all the measurements. All the uncertainty of the mean really tells you is how well values above the mean are offset by values below the mean. It is used when averaging the individual means of a number of independent samples from a population. The assumption is that each sample has approximately the same distribution as the whole population.

            I don’t believe this is a valid assumption when averaging single measurements from individual temperature sensors. Why? The mean of a single measurement (i.e., a sample) is the measure of itself. One should expect the mean of all the single measurement samples to be in the center of the whole distribution. In essence, the uncertainty of the mean in this case should be very close to 0.

            Statistically that is meaningless, because they have already stated that a +/- 0.3 uncertainty is allowed. If the number is supposed to be a standard deviation of 0.02, you would expect about 95% (2 sigmas) of the readings to be within about 0.04 of the mean. This doesn’t come close to lining up with their quoted percentages of how many sensors missed the 0.3 mark.

            I believe the standard deviation and variance of the whole distribution would be much more meaningful in this situation. Too bad they didn’t include it.

            Ultimately, this email doesn’t give one much to rely on. If you’ve dealt with measuring instruments then you know how hard it is to calibrate one and keep it that way. In addition, the calibration is normally done under controlled conditions of temperature, humidity, air movement, pressure, etc. Doing work in the field would seldom meet these requirements. Normally one would generate “calibration” curves that one could use to adjust measurements made in the field. For example, different sheets for various wind speeds. Each sheet would have curves for varying temperature and humidity.

            None of this is discussed in the email, leaving one to wonder about what was done and how. Way too much statistical inference and not enough plain old hands on.
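
            The 2-sigma argument above can be checked with a quick simulation. This is my own sketch, using only the figures quoted from the BOM email (not any real BOM data); the sigma values are assumptions for illustration:

```python
import random

random.seed(1)
n = 100_000  # simulated inspection comparisons

# Case A: if the spread of inspection differences really had
# sigma = 0.02 C, the +/- 0.3 C threshold would sit 15 sigmas out
# and essentially nothing would ever exceed it.
diffs_a = [random.gauss(0.0, 0.02) for _ in range(n)]
exceed_a = sum(abs(d) > 0.3 for d in diffs_a) / n

# Case B: to reproduce the reported ~4-6% exceedance, the spread
# must be far wider: sigma = 0.15 C puts 0.3 C at 2 sigmas,
# giving about 4.6% of differences beyond the threshold.
diffs_b = [random.gauss(0.0, 0.15) for _ in range(n)]
exceed_b = sum(abs(d) > 0.3 for d in diffs_b) / n

print(exceed_a)  # 0.0
print(round(exceed_b, 2))  # ~0.05
```

            In other words, the 0.02 figure and the quoted exceedance rates cannot both describe the spread of the same distribution, which supports reading 0.02 +/- 0.01 as an uncertainty of the mean rather than a standard deviation.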

    • Robert, Robert, Robert. You simply do not understand basic climate mathematics. (It’s like Bistromaths but much more advanced.) If you have 1,000 thermometers, each accurate to 0.5 degrees, then you know the temperature with an accuracy of 1/5,000th of a degree.

      Really, it’s so simple that even a kindergarten child could think of it. (And probably did.)

      • Exactly.
        Climate ‘science’ apparently has its own special statistical dispensation when it comes to matters of statistical significance.

      • That’s 1000 thermometers measuring the exact same spot. Unfortunately that is physically impossible, even with integrated circuits. Just as a rough estimate it would take 650 thermocouples arranged in a sphere all equidistant from each other and evenly spaced.

        Good luck with that.

        Not to mention that averaging temperatures is itself rather meaningless, except for the daily weather forecast.
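
        Even taking the averaging argument at face value, the standard error of a mean only shrinks as the square root of n, and only for independent, unbiased readings of the same quantity. A toy simulation of my own (assumed Gaussian errors and a made-up “true” temperature, purely for illustration) shows what 1,000 half-degree thermometers would actually buy you:

```python
import math
import random
import statistics

random.seed(42)
sigma = 0.5       # per-thermometer accuracy in degrees C
n = 1000          # thermometers, all reading the SAME spot
true_temp = 15.0  # made-up true temperature for the simulation

# Repeat the 1,000-thermometer average many times; the spread of
# those averages is the standard error of the mean.
means = [statistics.fmean(random.gauss(true_temp, sigma) for _ in range(n))
         for _ in range(2000)]

sem_theory = sigma / math.sqrt(n)   # 0.5/sqrt(1000) ~= 0.0158
sem_observed = statistics.stdev(means)

print(round(sem_theory, 4))    # 0.0158 -- nowhere near 1/5,000 of a degree
print(round(sem_observed, 3))  # close to the theoretical value
```

        And even that reduction assumes the 1,000 errors are independent and unbiased, which is exactly the assumption in dispute for readings taken at different locations with different instruments.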

  4. Its not Noble Cause Corruption IMO.
    It is purely greed on the part of the PredaTort lawyers.
    And as Chris, XOM is the golden golden to be squeezed.
    30% – 40% of $200Billion is a lot of new BMWs, 2nd and 3rd vacation homes, and other goodies/riches for the private attorney-partners running the shakedown.

    This is the primary reason the predatort ambulance chasers support Democrats, of course. All these #ExxonKnew participating state AGs are dirty, backroom-dealing Democrats. They are deeply in the pockets of folks like Steyer, Bloomberg, and the Rockefellers. It is also what motivates Senator Pocahontas and gets her major support from the predatort industry as a presidential candidate for 2020. Getting Pocahontas elected would effectively put one of their own at the head of the executive branch of the US government.

    • errata: “And as Chris said, XOM is the golden goose to be squeezed.”

      The aim is to provide the “gold” for the tort lawyers and their GreenSlime investors (the Rockefellers, Tom Steyer, Bloomberg, Soros, and a host of other investors in the Climate hustle). The state AGs know that a part of that money will come back to them as campaign kickbacks down the road when they run for Governor, Senator, President, etc.

  5. So Exxon knew what climate sensitivity to a doubling of CO2 was way back when?

    How would that be possible given that no one today knows what it is?

    • That’s not quite the correct message.

      In the tobacco case, Big Tobacco scientists knew in the 1950s-1960s that heavy smoking had very serious health consequences, BUT they hid the research and paid for research that showed no health effects from smoking.

      Here we have a case where Exxon knew everything that was publicly available. They didn’t “hide” anything. They only knew what everyone else at the time knew. There was no secret research funded by Esso or Mobil (as they were then). There was effort to conceal “climate harm” from CO2.

      This is just a pure and simple “Shakedown,” as Chris and Anthony called it. The predatort bar and the Democrat state AGs are all hoping for a settlement and a Big Tobacco-style lottery payout, paid for by you and me through higher energy costs. The people this would hurt the most are, as Chris pointed out, lower-income Americans and retirees living on fixed incomes with limited “extra income” to pay the lawyers at the predatort firms, typically partners already pulling down high six-figure and seven-figure annual incomes. They don’t care if gas costs $7 to $8 a gallon. That the Democrats are doing this is patently immoral and exposes their craven greed, but they don’t care. They hide behind the fake morality of Saving the Planet. IF it were about saving the planet, the Democrats would be doing much more to stop the real emissions growth, in China and India, and pushing nuclear power. That they do neither of those things again shows what their true agenda is.

      It’s ALL for the money – driven by Democrats’ and their GreenSlime backers’ greed for more wealth and their lust for total power.

      • errata: “There was NO effort to conceal ‘climate harm’ from CO2 (by Big Oil).”

        It is that realization that has seen this case devolve from the lofty “they hid research” down to the claim that Exxon merely misled shareholders by not fully disclosing the risk of stranded assets due to climate change policies that might be imposed in the future. Total BS, of course, because no one can predict what political changes in 20 years might mean for the laws that alter the economy, or for an asset’s value if it is left stranded by policy/statute.

  6. The Big Clime Syndicate in action. The more you look, and the deeper you dig, the more corruption, lies, and conspiracy to commit what amounts to extortion you see. It’s disgusting beyond belief.

    • Rumour has it that Michael Moore discovered the corruption and deceit of the Climate Movement unintentionally while making a film to expose the same in its opponents.

  7. Whatever Exxon knew is certainly less than “all there is to know about climate”. A new analysis of radiosonde data shows there is no greenhouse effect in our atmosphere. See ( ) at 1 hr 01 min for their conclusions. These include that the IPCC was wrong to conclude that recent climate changes were due to greenhouse gases, and that current climate model projections are worthless.
    If this is the truth, and I think the data analysis is solid enough to say it is, then what Exxon knew was wrong, just like all the others, including the IPCC, that have based their statements on the now-falsified hypothesis.

  8. One must admire, and at the same time be in fearful awe of, the propaganda machine that has successfully convinced so many useful idiots and their followers to back a cause that cannot be proven.

    • When you’re a multi-billionaire with billions of your dollars invested in Green Schemes, schemes which are dependent on rapidly rising fossil fuel prices to drive up their value, what is a few hundred million dollars to fund a sophisticated propaganda machine and political campaign pay-offs to Democrat politicians?

      The Green Propaganda machine is merely a business cost to the GreenSlime, a marketing cost, a necessary advertising bill to be paid.
      They are billionaires looking for huge pay-offs in the decades to come from Western economies and a hollowed-out middle class hobbled with energy poverty and paying huge sums for scarce electricity to the early investors in the WindMill hustle.

    • It makes you think back to the two world wars and cold war that Einstein and his generation went through in Europe with bad ideas building momentum to such crests that the masses were thrown into the fire with glee. That must have really been something for all involved and it took quite a lot of participants to distort reality to that extent. Who knew that a few coffee shops in Vienna would host so many bad ideas at the same time.

  9. Who knew the butterfly effect of turning up the thermostat in a Congressional hearing room in 1988 would lead to this? I guess the conspirators knew.

  10. This hustle to shake down big oil should have little to zero chance of succeeding. At best, no one can prove that any man-made warming hasn’t been at least 50% beneficial, and at worst, no one can prove that any of this AGW warming has been at least 50% negative. There is just no way to prove it, although if you really understand all the plus sides of warming, it is mainly good; maybe 60-40 good, or even 80-20 good. Other than real pollution, I just don’t see how any modest warming has been all that bad for the planet and its inhabitants.

    One thing is certain, though: as a species we have increased from less than 1 billion to 7.6 billion, due in part to the natural warming that began at the end of the LIA circa 1850, and fossil fuels have been responsible for the majority of the advances that we earthlings have made over the last 170+ years. Why don’t they go sue McDonald’s if they want to cast blame and bankrupt someone? At least they would have a more logical case and better facts to deal with. That day is coming soon as well.

  11. Another point I want to make on Noble Cause Corruption (NCC) that too many people mess up and use it incorrectly.

    Real NCC is a Robin Hood-style operator, taking from the rich (haves) to give to the poor (have-nots) without concern for themselves if they get caught.

    A modern example (hypothetical) of Real NCC might be:
    A pharmacist skims very high-priced anti-cancer or diabetes drugs from the inventory (inventory being sold to people with money and insurance). He/she skims slowly over time, so that the small percentage taken every week can be covered up and go unmissed in the large outgoing inventory. The “Robin Hood pharmacist” then takes those skimmed drugs and gives them to poor and un/under-insured people in his/her community. That would be a true case of Real NCC.

    An example of False NCC: if the pharmacist instead started skimming opioids (Oxycontin and Percocet, say) from the inventory and selling them to opioid addicts desperate for the drugs, then he/she might claim it’s NCC, but in reality it is personal greed that tries to hide behind NCC when caught. A fake morality. A false claim of NCC.

    This 2nd example is the situation we have here with these #ExxonKnew shakedowns by those state AGs and their GreenSlime backers. It’s a fake morality rip-off being run by the plaintiffs. They claim it’s about helping the climate and thus people, when it’s really about enriching themselves and their special interest pals to the detriment of society.
    Because as Chris pointed out, Exxon-Mobil and all the defendants would merely pass-on the settlement costs to the consumers of their products.

    So seen in proper context, the #ExxonKnew litigation from these plaintiffs is an attempt at taking from the Have-nots (the poor, the elderly on fixed incomes) and giving to the wealthy (i.e., the predatort ambulance chasers making high six- and seven-figure annual salaries). This is a fake NCC.

  12. Oh how I wish Exxon could somehow orchestrate a complete cut-off at the gas pump, for the duration of the trial, and if Exxon loses or settles, then forever, for every voter from a state with an AG participating in this fiasco.

    That’s my idea of Utopia.

    Short of that, I hope Exxon will not let this slide, will fight it to the bitter end, and will countersue for legal expenses and punitive damages.

    • These plaintiffs are in for a generational fight having sued EM. Apparently they don’t remember the Valdez litigation. That fight lasted 20 years. When the dramatically reduced jury award on punitive damages was finally paid by EM after its successful appeal to the SCOTUS, almost all the original plaintiffs were dead. I expect a similar path for this litigation even assuming the plaintiffs are successful at the trial court level.

  13. If memory serves me well, a lieutenant on a vessel was the first to acknowledge the El Niño effect.

    So, with the same small failings as it ever had, the work of the sailors gave balanced results, which were probably the first proof: yes, El Niño effects can be seen.

Comments are closed.