“USGS Gets Politics Out of Climate Forecasts”

Guest “another thing lost in the November coup” by David Middleton

USGS Director Jim Reilly authored a recent opinion piece in the Wall Street Journal

USGS Gets Politics Out of Climate Forecasts
My agency makes a significant advance in the government’s approach to science.

By Jim Reilly
Dec. 21, 2020

The world’s climate is changing, as it always has. The challenge is to understand how and why, which is why the U.S. Geological Survey has adopted the most comprehensive climate analysis requirements ever implemented by the federal government.

Forecasting future responses and impacts for a system as complex as the Earth is difficult and uncertain.

[…]

The U.S. Geological Survey is at the forefront of climate science for the federal government. USGS’s chief scientist, Geoffrey Plumlee, and other career scientists recently published a report, “Using Information From Global Climate Models to Inform Policymaking—the Role of the U.S. Geological Survey,” which outlines a broad, consistent and empirical approach for analyzing climate change conditions.

The approach includes evaluating the full range of projected climate outcomes, making available the data used in developing forecasts, describing the level of uncertainty in the findings, and periodically assessing past expectations against actual performance to provide guidance on future projections.

[…]

Moving forward, this logical approach will be used by the USGS and the Interior Department for all climate-related analysis and research—a significant advancement in the government’s use and presentation of climate science.

These requirements may seem like common sense, but there has been wide latitude in how climate assessments have been used in the past. This new approach will improve scientific efficacy and provide a higher degree of confidence for policy makers responding to potential future climate change conditions because a full range of plausible outcomes will be considered.

Science should never be political. We shouldn’t treat the most extreme forecasts as an inevitable future apocalypse. The full array of forecasts of climate models should be considered.

[…]

Mr. Reilly is a geologist, a former astronaut and director of the U.S. Geological Survey.

Wall Street Journal

I worked with Jim Reilly at Enserch Exploration in the 1980s and early 1990s, before he was selected for NASA’s astronaut program in December 1994. It’s interesting to note that Jim is never referred to as “Dr. Reilly,” despite having a PhD in geosciences from the University of Texas at Dallas. This is actually proper. Apart from MD, DVM, DDS and other medical-field doctors, PhD, EdD and similar doctors would only be addressed as “Dr.” in formal settings, like a classroom… But I digress.

Jim cites a recent USGS publication which makes the case that the full range of model outcomes, along with a reasonable assessment of uncertainty, needs to be made clear to policymakers. This science-based approach to climate policymaking might have actually gained traction if not for the November coup d’état… (I don’t give a rat’s @$$ if anyone reading this objects to this phrase). The paper, Terando et al., 2020, is well worth reading. It features a variation of one of my favorite climate models.

Figure 1. Modeled human plus natural climate forcing compared to three instrumental records (see Terando for specifics)
Figure 2. Modeled human climate forcing compared to three instrumental records (see Terando for specifics)

If the models are reasonably accurate, the early 20th century warming can be explained by natural forcing mechanisms, whereas some or all of the warming since about 1975 cannot be explained by natural forcing mechanisms alone. That said, the models only incorporate known, reasonably well-understood forcing mechanisms. Judith Curry illustrated this concept quite well…

Figure 3. You only find what you’re looking for. (JC at the National Press Club)

Setting aside the unknown and/or poorly understood natural forcing mechanisms, not incorporated in the model, we have two very similar warming episodes, one explained by natural factors and one requiring human input.

Figure 4. HadCRUT4 1904-Present.

Let’s assume arguendo that all of the warming since 1975 is due to anthropogenic greenhouse gas emissions. What would this mean?

It would mean that the rise in atmospheric CO2 from ~280 to ~400 ppm caused 0.8 °C of warming. Recent instrumental observation-derived climate sensitivity estimates indicate an equilibrium climate sensitivity (ECS) of about 2.3 °C per doubling of atmospheric CO2, equating to a transient climate response (TCR) of about 1.6 °C per doubling of atmospheric CO2. Oddly enough, with a TCR of 1.6 °C, we would expect to see 0.8 °C of warming at 400 ppm CO2.
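For concreteness, here is that arithmetic as a minimal sketch (the 280 ppm, 400 ppm and 1.6 °C TCR figures are the ones quoted above; warming is assumed to scale with the logarithm of the CO2 ratio):

```python
import math

# Figures quoted above (illustrative, not a fitted result)
c0_ppm = 280.0   # assumed pre-industrial CO2 concentration
c_ppm = 400.0    # approximate recent CO2 concentration
tcr = 1.6        # transient climate response, deg C per doubling of CO2

# Warming scales with the logarithm of the concentration ratio:
# dT = TCR * log2(C / C0)
delta_t = tcr * math.log2(c_ppm / c0_ppm)
print(f"Expected transient warming: {delta_t:.2f} deg C")  # about 0.8 deg C
```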

Figure 5. Expected warming with a TCR of 1.6 °C.

Even more oddly (I am being very sarcastic), this is consistent with the climate behaving much closer to the bottom of the model uncertainty range than to the top (which is often described as “business as usual”).

Figure 6. Models vs observations (Climate Lab Book).

It’s also important to note that the 0.8 °C of allegedly anthropogenic warming started here:

Figure 7. Context.

A science-based approach to climate change would indicate that humans are having some effect on climate, that it doesn’t appear to be a crisis and that to the extent it might be a long-term problem, reasonable, economically viable steps could be taken now (natural gas to nuclear, N2N), to help ensure that it never escalates beyond a potential long-term problem.

Unfortunately, the incoming Harris-Biden Dominion have indicated a desire to “clean house” at the Department of the Interior, which they view as too friendly to the fossil fuel industries. If you thought 2020 was a total schist show, 2021-2024 promises to be a lot worse… Happy New Year!

Reference

Terando, A., Reidmiller, D., Hostetler, S.W., Littell, J.S., Beard, T.D., Jr., Weiskopf, S.R., Belnap, J., and Plumlee, G.S., 2020, Using information from global climate models to inform policymaking—The role of the U.S. Geological Survey: U.S. Geological Survey Open-File Report 2020–1058, 25 p.,
https://doi.org/10.3133/ofr20201058.

277 Comments
Geoff Sherrington
January 1, 2021 2:33 am

From their graphs, they still cling to the stupid, unproven claim of being able to measure these temperatures to better than +/- 0.5 deg C before the year 1900.
This is fantasy. I doubt that they will ever get scientific enough to estimate uncertainty in a realistic, agreed and reproducible manner. It is all in the text books.
Even using their graphs above, there is uncertainty overlap since year 2000 between the alleged man-made and total temperature changes. This means that the effects driving them cannot be separated. If there is a difference, we cannot know what causes it.
There is no way yet known to separate man-made from natural T change.
There is no accepted value of sensitivity relating temperature to CO2 in the air. Even the sign is not settled. Even zero is not excluded.
The CO2 level did not drop, it seems, after a large Covid-induced drop in estimated global emissions. Why not?
CO2 is near-saturated in its ability to drive temperature change with increasing CO2 concentration.

These matters are all widely known in science, yet none has been resolved to fundamental agreement after decades of research.

The ostrich can keep its head in the sand only so long before it dies. I do not want to see geology die because of juvenile or willful mistreatment. So, Dr Reilly, what are you going to do about a required approach to better answers? Geoff S (Scientist, Geochemist, retired).

RockyRoad
Reply to  David Middleton
January 1, 2021 5:24 am

We’d tolerate looking at them one at a time. We’re patient people.

Pat Frank
Reply to  David Middleton
January 1, 2021 11:49 am

Plotting the uncertainties would make the graph show the truth that nobody knows what they’re talking about.

Patrick B
Reply to  David Middleton
January 1, 2021 3:49 pm

As I have said several times, it should be a standard convention on any graph with model data to draw a bright vertical line at the year the model was run. This would highlight what period the model could be tuned to match and what period it was forecasting.

Tom Abbott
Reply to  David Middleton
January 1, 2021 7:43 pm

“Observed” temperatures before 1979 are bastardized temperatures. You can’t derive anything scientific from made-up temperature figures. All you get is more BS.

Why do you insist on continuing to treat the bogus, bastardized Hockey Stick charts as legitimate?

Do you see a fit between your ECS estimates and the anthromorphic warming since 1979? Maybe you are seeing what you want to see. I would say you are, seeing as how all regional surface temperature charts tell a completely different story than the bogus Hockey Stick charts. They show there is no warming today compared with the Early Twentieth Century.

Btw, one of our Climategate Charlatans says the warming from 1910 to 1940 is statistically indistinguishable from the warming from 1980 to the present. You show a difference of 0.2C.

I don’t mean to be harsh, but for the life of me, I can’t see how a well-informed person could think the bogus Hockey Sticks are good for anything.

They are made up figures that promote a political agenda, and here you are helping them out.

I don’t get it.

How about doing some of your comparisons using regional surface temperature charts? That’s what you ought to be doing to be accurate.

Jeff Alberts
Reply to  Tom Abbott
January 2, 2021 7:50 pm

“Do you see a fit between your ECS estimates and the anthromorphic warming since 1979?”

Anthropomorphic? Is warming wearing glasses and drinking a Coke or something?

Tom Abbott
Reply to  Jeff Alberts
January 4, 2021 4:03 am

Warming just has fumble fingers sometimes.

Jeff Alberts
Reply to  Tom Abbott
January 5, 2021 8:37 pm

Doh!

Tom Abbott
Reply to  Tom Abbott
January 4, 2021 4:02 am

Well, after reading all the comments in this thread, I think I now get it, David.

You actually believe that CO2 causes substantial warming in the atmosphere, as you claim you have seen “conclusively” in the ice core record.

So you are a lukewarmer. Now it all makes sense.

Vuk
Reply to  Geoff Sherrington
January 1, 2021 4:48 am

in 1971 scientists said: ‘6 degrees of cooling expected in 50 years time’.

Redge
Reply to  Vuk
January 1, 2021 5:07 am

The paper abstract reads:

Effects on the global temperature of large increases in carbon dioxide and aerosol densities in the atmosphere of Earth have been computed. It is found that, although the addition of carbon dioxide in the atmosphere does increase the surface temperature, the rate of temperature increase diminishes with increasing carbon dioxide in the atmosphere. For aerosols, however, the net effect of increase in density is to reduce the surface temperature of Earth. Because of the exponential dependence of the backscattering, the rate of temperature decrease is augmented with increasing aerosol content. An increase by only a factor of 4 in global aerosol background concentration may be sufficient to reduce the surface temperature by as much as 3.5 ° K. If sustained over a period of several years, such a temperature decrease over the whole globe is believed to be sufficient to trigger an ice age.

Atmospheric Carbon Dioxide and Aerosols: Effects of Large Increases on Global Climate

Vuk
Reply to  Redge
January 1, 2021 5:54 am

Perhaps someone should tell that to Mr. Bill Gates

Redge
Reply to  Vuk
January 1, 2021 6:24 am

The interesting thing is the paper has not been retracted despite unsubstantiated claims by warmists that Schneider retracted the paper in a personal communication.

Tombstone Gabby
Reply to  Redge
January 1, 2021 7:18 pm

Speaking of Schneider, he had a young associate calculate the effect of a four-fold increase in aerosols. The answer, “Ice Age”. That youngster, little Jimmy Hansen.

It really is a small world.

Jeff Alberts
Reply to  Vuk
January 2, 2021 7:52 pm

“in 1971 scientists said: ‘6 degrees of cooling expected in 50 years time’.”

You say “scientists”, the headline says “scientist”. This is known as hyperbole.

Jim Gorman
Reply to  Geoff Sherrington
January 1, 2021 4:58 am

The statistics of determining a Global Average Temperature are so screwed up it should be totally disregarded. There are numerous failures of using data that do not meet the requirements of geomatics as to stationarity and distribution of data. One of the common refrains is that temps are correlated out to 1500 km. What bull hockey. Yeah, they are probably correlated to an extent but not sufficiently to justify estimations out to two decimal points. The correlations would barely support integer values at that distance.

The research that another fellow and I have done seems to point to the conclusion that when temps are trended at local and regional levels, there simply are not enough areas with very high temp increases to offset those with little to no warming, or even cooling. This is evidence that the method of estimating the GAT is flawed from the very start. It certainly is evidence that the trending being done is not examining the statistical underpinnings of the theories being proposed.

Clyde Spencer
Reply to  Jim Gorman
January 1, 2021 11:39 am

Jim
What do you suppose the chances are that the Biden administration will totally disregard the problems with the GAT?

Jeff Alberts
Reply to  Jim Gorman
January 2, 2021 11:28 pm

Jim, the easier answer is “intensive properties”. There is no GAT.

Shanghai Dan
Reply to  Geoff Sherrington
January 1, 2021 9:18 am

One of the first things drilled into any new engineer or scientist (at least when I went to college) was that the tolerance of the instrument was the tolerance of any resulting measurement OR SET OF MEASUREMENTS.

If my voltmeter was accurate to 100 mV, and I measured a 9V battery 100 times, I could claim the average voltage may be 8.995V – but it would properly be called 8.995V +/- 0.1V

You don’t get to drop your tolerance by averaging; you only get a more precise “average” value with the same, base tolerance of the instrument.
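A small simulation of the point being made, assuming (purely for illustration) a meter with an unknown fixed calibration offset inside its ±0.1 V tolerance plus random reading-to-reading noise:

```python
import random

random.seed(0)
true_voltage = 9.000        # what the battery actually delivers
systematic_offset = -0.05   # unknown miscalibration, somewhere inside the +/- 0.1 V tolerance
noise_sd = 0.02             # random reading-to-reading scatter

readings = [true_voltage + systematic_offset + random.gauss(0.0, noise_sd)
            for _ in range(100)]
mean_reading = sum(readings) / len(readings)

# Averaging beats down the random scatter (the mean repeats to roughly noise_sd/sqrt(N)),
# but the unknown systematic offset is untouched, so the result should still be quoted
# against the instrument's stated +/- 0.1 V tolerance.
print(f"Mean of 100 readings: {mean_reading:.4f} V  ->  quote as {mean_reading:.3f} +/- 0.1 V")
```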

bigoilbob
Reply to  Shanghai Dan
January 1, 2021 2:47 pm

Tell me more about this voltmeter. If it consistently misses the mark by either over or under the correct voltage, by 100 mV, then I agree with you. It needs calibration. But if repeated readings give a range around the correct reading of 100 mV, then the Central Limit Theorem applies. It would also apply if we measured this voltage with 100 volt meters that needed calibration, but had miscalibrations that converged on the correct voltage.

Carlo, Monte
Reply to  bigoilbob
January 1, 2021 4:06 pm

The CLT only applies if you are making multiple measurements of the same measurand, which is certainly not true of global temperature “averages”.

Jim Gorman
Reply to  bigoilbob
January 1, 2021 4:21 pm

You need to review your understandings.

First, in order to match the original calibration, measurements need to be done with the same environmental conditions as when it was calibrated. Most decent providers provide a calibration chart that can help in modifying the actual reading to a more accurate reading but there are limits.

Second, again most decent providers will provide precision ranges that take into account the acceptable change during the calibration period. You may get a reading that is bang on the same day as it was calibrated, but 10 months down the road that won’t be the case. Good instrument designers and providers take this into account when making their specifications.

Third, the CLT only applies if the “errors” in the 100 readings fall into a normal or Gaussian distribution. This means making a frequency distribution to determine if the distribution is normal. Only then can you assume the mean or average is the “true value” of the readings taken by that instrument. Also, the “true value” has no relation to either accuracy or precision. It really just means there were offsetting readings above and below the mean (true value).

Fourth, you need to go back to metrology school. You CAN NOT increase either accuracy or precision by using differing measuring devices. Random errors can only be generated by using the same device to measure the same thing multiple times. Read the GUM if you need an education. More climate scientists need to do that because too many of them make the same mistake. You can not average temps from different stations and claim that random errors let you get more accuracy and precision.

bigoilbob
Reply to  Jim Gorman
January 1, 2021 6:17 pm

“Third, the CLT only applies if the “errors” in the 100 readings fall into a normal or Gaussian distribution.”

Nope. https://statisticsbyjim.com/basics/central-limit-theorem/#:~:text=The%20central%20limit%20theorem%20applies,must%20have%20a%20finite%20variance.

“You CAN NOT increase either accuracy or precision by using differing measuring devices. Random errors can only be generated by using the same device to measure the same thing multiple times.”

You need to pass on this undocumented, revealed truth to David Middleton/Andy May. If they’ve been truthin’ about their livelihoods, they’ve spent decades gathering data from multiple devices, from multiple companies, over months/years, to estimate rock and fluid properties for oil and gas reservoirs. They are passed on to us, where we spatially distribute them (with the concomitant probability distributions), properly correlate them, fit the results into cells, run multiple realizations, and use the outputs to economically model development plans. We obviously update based on results. But these MODELING processes work out often enough that they pay us like MD’s.

FYI, our measurement processes are WAY less reliable than climate measurements….

Geoff Sherrington
Reply to  bigoilbob
January 2, 2021 2:03 am

bigoilbob,
Have you ever thought of this simple source of error?
The old maximum recording thermometers had a metal peg above the mercury column, for resetting to a low temperature after the reading is taken. If you did a probability distribution of noise readings a few minutes each side of the maximum, it would be one-sided because there would be no readings higher than that daily max. This contrasts with newer devices like Pt resistance thermometers, which can record the voltages from temperatures without the barrier of not going beyond the max. So any noise in those same few minutes is two-sided – then an average is taken that, ceteris paribus, will be higher than the comparison mercury Tmax thermometer.
The concept is a bit similar to temperatures taken in near-boiling to boiling water. The noise distribution is one-sided because the physics stops the T from going over 100C. Geoff S
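A toy simulation of the effect being described, under two loud simplifying assumptions: the mercury maximum thermometer is taken to follow the smooth temperature, while the electronic sensor is taken to report the maximum of fast, noisy samples (real station processing differs, so this only illustrates the one-sidedness of noise at a maximum):

```python
import random

random.seed(1)

# Smooth temperature hump (deg C) around a daily maximum, sampled once per second
# for 20 minutes either side of the peak, plus sensor noise on the fast readings.
peak_c = 30.0
noise_sd = 0.1
times = range(-1200, 1201)
smooth = [peak_c - 3.5e-7 * t * t for t in times]            # slow, noise-free signal
sampled = [x + random.gauss(0.0, noise_sd) for x in smooth]  # fast, noisy samples

mercury_tmax = max(smooth)      # assumed to track the smooth signal only
electronic_tmax = max(sampled)  # the max of noisy samples can only sit at or above the smooth peak

print(f"Smooth-signal (mercury-style) Tmax: {mercury_tmax:.2f} C")
print(f"Max of noisy fast samples:          {electronic_tmax:.2f} C")
```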

Jim Gorman
Reply to  bigoilbob
January 2, 2021 4:32 am

Big –> “Nope. https://statisticsbyjim.com/basics/central-limit-theorem/#:~:text=The%20central%20limit%20theorem%20applies,must%20have%20a%20finite%20variance.”

Why did I think you would respond with something stupid that does not apply? Your referenced document has the following: “Additionally, the central limit theorem applies to independent, identically distributed variables. In other words, the value of one observation does not depend on the value of another observation. And, the distribution of that variable must remain constant across all measurements.”

Do you understand what this means? Let’s isolate even further: “In other words, the value of one observation does not depend on the value of another observation.” This means the CLT does not apply when you are measuring one thing. Each measurement of the same thing DOES depend upon previous readings. Each reading is the “true value +/- error value”, so the CLT does not apply. This means your distribution must be normal or Gaussian in order for the mean not to include an error component. Each +error must have an offsetting -error, in other words, a normal distribution.

I’ll elucidate a little further. This is from: Error and Uncertainty (ou.edu)

“Random error
The random error in experimental results is due to lack of observer precision, perhaps in misreading an analogue scale due to parallax. This will result in a spread of results, even in the most carefully designed of experiments. Due to the random nature of these errors, there is an equal chance that they will be above or below the ‘true’ value. To mitigate against such errors, it is correct technique to take many readings and find the mean, even in the simplest of experiments. Because it is impossible to know the ‘true’ value, the best estimate is the mean of repeat readings.

The random error (also called the mean deviation) is then a measure of the spread of the repeat readings:

Random error, ∆ran = R/N
R = range (maximum – minimum)
N = number of repeat readings

Random error is reduced by increasing the number of readings, N. As N increases ∆ran decreases.

The spread of experimental results about the mean value follows a normal distribution curve.”

Notice that last sentence, it is important. You need to remember when you see the term “random variable”, you are describing a normal distribution around a mean.

As to your last paragraph, I have mentioned to Dave in another post that the whole field of geomatics, which you are describing, relies on a time invariant static distribution. Oil fields and other mineral deposits don’t generally move around absent some serious disturbance. However contour maps of temps, winds, pressures, etc. can vary within minutes, if not seconds. So any estimates using geomatics are fleeting.

Generally, measurements are not stable. When you forge a part, the measurement is done at one moment in time and then passed to the customer. Quality products don’t generally allow for future revisions to meet specified criteria. Have you heard the term “right the first time”? Quality assurance lives by this. Measurements are crucial. Carpenters live and die by this. The last time I saw a board stretcher was NEVER!

The whole field of metrology is designed to provide quality measurements. You need to learn some metrology. Here are two references that explain metrology in some detail. Microsoft Word – JCGM_100_1995_E.doc (isobudgets.com) and a book, “An Introduction to Error Analysis” by John R. Taylor.

bigoilbob
Reply to  Jim Gorman
January 2, 2021 8:00 am

“Each measurement of the same thing DOES depend upon previous readings”

Wut? Where? When? As in, nope. We are discussing just the kind of measurements that ARE subject to the CLT.

Alt. world in action, folks. And FYI, please read David Middleton’s description of some relevant oil and gas work processes to definitively rebut this nonsense….

Tim Gorman
Reply to  bigoilbob
January 2, 2021 8:33 am

bob,

Did you not read the following statement?

“Oil fields and other mineral deposits don’t generally move around absent some serious disturbance. However contour maps of temps, winds, pressures, etc. can vary within minutes”

Oil and gas measurements are measuring the same thing, day after day, week after week, and year after year. Oil and gas deposits don’t change over these time periods.

Temperatures do!

Thus you are trying to compare apples and oranges. It demonstrates a complete lack of knowledge of the physical world.

Tim Gorman
Reply to  David Middleton
January 3, 2021 9:49 am

How far has the Ogallala Aquifer moved over the past 500 years?

How far has the Houston-Oklahoma oil field moved in the past 500 years?

Most geological fluid contacts don’t move in less than geological time frames. There is no fluid flow to cause the erosion that rivers have, which causes a river bed to move.

Tim Gorman
Reply to  David Middleton
January 3, 2021 10:37 am

And you didn’t answer the question! Oil fields and aquifers may empty over time as they are pumped out. Thus certain wells along the edge of each may go dry. But the location of the field or aquifer does NOT move.

Tim Gorman
Reply to  David Middleton
January 4, 2021 4:59 am

Really? Asking how often oil fields or water aquifers move location doesn’t make any sense?

You were trying to claim that fluid contacts move so taking subsequent readings are like taking subsequent temperature readings in a changing environment.

So asking how often oil fields and aquifers move is a perfectly valid follow up question.

Which you failed to answer. Why?

bigoilbob
Reply to  David Middleton
January 2, 2021 8:51 am

Sorry, no “apples and oranges”, here, Tim. Oil and gas evaluative work processes involve all of the things you list as unevaluable and incomparable. Different instruments, different times, different measurement techniques. Yet somehow, we manage, with basic data MUCH less reliable than climate data.

Sounds like you agree, David. In fact, please rebut ANY oil and gas comments I’ve made in this forum. We disagree on AGW, but your doubts about my oilfield trash bona fides need more meat than your fact-free “coup” opinions…

Tim Gorman
Reply to  bigoilbob
January 3, 2021 9:59 am

Yes, apples and oranges. I think Jim did a pretty good job of explaining why. When you take soundings of geological attributes, they don’t change much if at all. How much movement has the Ogallala Aquifer seen over the past 500 years?

Jim Gorman
Reply to  bigoilbob
January 3, 2021 1:04 pm

Now you’re being a troll by making a Straw Man argument. I have said nothing about the data you use or how you process it. I have said that what you are measuring is stationary as a time series.

Temp measurements are not stationary. Santa Barbara’s temp data series has totally different variance, means, ranges, etc. from Tulsa, Oklahoma. You can’t just average the two, even using anomalies, without taking this into account. That doesn’t even address the resulting uncertainty in measurement. The uncertainties dwarf the anomalies. To ignore that is being a mathematician looking for an answer, not a scientist looking for answers.

Jim Gorman
Reply to  David Middleton
January 3, 2021 12:34 pm

Dave,
We’re not discussing ice cores, which individually and untouched, are also static in time by the way. However, if you extract a section, melt it and determine the composition of the various elements, you have non repeatable measurements. You are stuck with the uncertainties, accuracy, and precision of the instruments you use. You can’t use different instruments because the source is now gone. You can’t say measurements at different sections allow you to reduce the uncertainties at that first one because they are different things. You can’t say I can increase the precision of the first measurements because you are measuring different things.

Look, I’m not saying you can’t normalize data. I am saying that your process can’t ignore uncertainty and precision. You can’t decrease variance by simple averaging. You can’t ignore stationarity when combining different time series that do not have similar statistical parameters.

Tim Gorman
Reply to  David Middleton
January 4, 2021 1:50 pm

David,

Normalizing data is not “adjusting” it. A temperature reading is a temperature reading. Any “adjustment” to it is for pushing an agenda. You simply cannot “adjust” away uncertainty in the individual temperature readings. You can’t use averages to do it, you can’t use guessing to do it, you can’t use statistics to do it. Even the use of anomalies is nothing more than using scaling to make differences sound worse than they really are. A change from 0.1C to 0.2C, a 100% change, sounds much worse than a change from 20.1C to 20.2C, about a 0.5% change.

Even the use of anomalies is questionable from a physical science viewpoint. If you are trying to look at what is happening to a global average temperature then do so, average all the temperatures together. The use of anomalies tells you nothing about the global average temperature. If the use of anomalies says there is a 0.01C change from one year to the next then the average of all the temperatures should show you the same thing. If it doesn’t tell you the same thing then something is wrong with one of the processes.

Geoff has brought up a very good point about the use of temperature readings. A mid-range value is *NOT* the same as an average value. If you assume that the temperature curve is a sine wave (on a clear day and night it is very close to a sine wave), then the average of the varying part is actually T-peak * (0.637), where T-peak is the amplitude determined by (max-min)/2. The mid-range value, (max+min)/2, represents the offset from zero for the waveform, similar to a DC voltage offset of an AC voltage sine wave. But the mid-range value is certainly not the average temperature for the temperature profile.

Does this matter? Can’t you just factor out the .637? Nope. On days where you have clouds the shape of the daily temperature curve can be quite different in the day and the night. Thus you have to integrate the daytime temperature profile separately from the nighttime temperature profile to get an accurate representation of how “hot” the day actually was, assuming temperature is being used as a proxy for heat content (a poor assumption to begin with).

This is why I see the use of degree-day values as having far more accuracy for interpreting temperature data than just using a mid-range value; contemporary degree-day calculations “are” an integral of the temperature profile.

Even better would be to actually calculate the enthalpy at each station using an integral. Most modern stations since the early 1980s record temperature, relative humidity, and pressure. With these, the enthalpy can be calculated for each measurement. I’ll never understand why this isn’t being done.
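A small illustration of the mid-range versus integrated-average distinction, using a made-up, asymmetric daily profile (not real station data):

```python
import math

# Toy 24-hour temperature profile (deg C), sampled hourly: a long, flat night
# and a brief afternoon spike, so the curve is far from a clean sine wave.
temps = [10.0 + 12.0 * math.exp(-((h - 15) / 3.0) ** 2) for h in range(24)]

t_max, t_min = max(temps), min(temps)
mid_range = (t_max + t_min) / 2.0           # the usual (Tmax + Tmin)/2 "daily mean"
integrated_mean = sum(temps) / len(temps)   # crude integral (hourly average) of the profile

print(f"Mid-range (Tmax+Tmin)/2: {mid_range:.2f} C")
print(f"Integrated mean:         {integrated_mean:.2f} C")  # noticeably lower for this shape
```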

Jim Gorman
Reply to  bigoilbob
January 3, 2021 9:13 am

Bob –> Each and every measurement of a temperature is a non-repeatable measurement. Once made, it recedes into the past, never to be done again. This alone makes the CLT inapplicable to temperature measurements. You have one and only one measurement to use. That single measurement is all you have and all you will ever have. There is no data that can be used in a statistical analysis to resolve to a “true value”. If you like, I can give you university references that reflect this. Ultimately, the CLT only applies when you make multiple measurements of THE SAME THING with the SAME device.

From the Britannica description of the CLT: “Laplace and his contemporaries were interested in the theorem primarily because of its importance in repeated measurements of the same quantity. If the individual measurements could be viewed as approximately independent and identically distributed, then their mean could be approximated by a normal distribution.”

Your and Dave’s descriptions of oil/mineral deposits do not apply. Those are relatively static and can be measured again later in time. The measuring device in the future may give better accuracy and precision. But the new measurements simply cannot be said to make the previous measurements more precise or accurate. They are what they were and will remain that way forever.

If you try to combine the old measurements with the new measurements, this is where metrology enters the picture. You can certainly average the readings but the CLT does not apply. First, using significant digits requires that the least precise measurements control the precision (the number of decimal places). Secondly, the uncertainties combine in quadrature (root-sum-square), where the measurement with the most uncertainty will dominate.

Bob, please spend some time with the documents I mentioned about uncertainty and error (they are not the same). There are other web resources that discuss in detail the vagaries of trending time series data. If you ask I can certainly provide some.

Brooks Hurd
Reply to  Shanghai Dan
January 2, 2021 8:17 am

Shanghai Dan,
This is what I was taught during my engineering education. This was reinforced by 6 Sigma course work. It is possible to reduce error by improving the design, but it never disappears.

It amazes me that many climate science papers ignore error.

Tim Gorman
Reply to  Brooks Hurd
January 2, 2021 8:35 am

Error and uncertainty are not quite the same thing. They don’t work in quite the same way.

MarkW
Reply to  Geoff Sherrington
January 1, 2021 12:12 pm

There was no “huge” drop in CO2 production due to the COVID lockdowns. At the max, it was about 10%, and that amount quickly decreased as lockdowns were eased around the world.

Tim Gorman
Reply to  Geoff Sherrington
January 1, 2021 12:23 pm

Geoff,

The accepted uncertainty of the Argo floats is +/- 0.5C. The accepted uncertainty of federal land based measurement stations is +/- 0.6C.

Even today we can’t discern a temperature trend unless it is more than 1C because of the uncertainty in the measurements. You can’t lower that uncertainty by using averages, using anomalies, using gridding, using homogenization of data, or any other statistical tricks.

Since each of these temperature measurements is an independent measurement of a different thing using a different measurement device, each data point represents a population of size one. Even combining maximum and minimum temps for one day results in the uncertainty growing by root sum square, i.e. a +/- 0.6C becomes +/- 0.8C when calculating a daily average.

It’s why, if the uncertainty interval for the GAT were plotted along with the temperatures, the interval would overwhelm the actual values being graphed.
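A sketch of the quadrature arithmetic behind those numbers; whether the combined figure is then divided by 2 for the (Tmax+Tmin)/2 average is exactly the sort of convention argued over later in this thread:

```python
import math

u_tmax = 0.6  # assumed uncertainty of the daily maximum reading, deg C
u_tmin = 0.6  # assumed uncertainty of the daily minimum reading, deg C

# Root-sum-square combination for the sum Tmax + Tmin
u_sum = math.sqrt(u_tmax ** 2 + u_tmin ** 2)

# Propagated through the average (Tmax + Tmin) / 2 under the divide-by-2 convention
u_avg = u_sum / 2.0

print(f"RSS of the two readings:     +/- {u_sum:.2f} C")  # ~0.85, i.e. the ~0.8 figure above
print(f"Propagated to (Tmax+Tmin)/2: +/- {u_avg:.2f} C")
```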

mkelly
Reply to  Geoff Sherrington
January 1, 2021 1:43 pm

Mr. Sherrington says: There is no accepted value of sensitivity relating temperature to CO2 in the air.

dT = Q / (Cp · m)

This relates the temperature change to the specific heat of dry air, of which CO2 is a component. The Shomate equation will tell us what the new Cp of dry air is with an increase of CO2.

If we knew the accurate percentages of the other gases, it would be number crunching to see if a doubling of CO2 caused an increase in temperature with the same energy input.
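A rough sketch of that kind of number crunching, using approximate room-temperature specific heats rather than the full Shomate expressions (the heat-capacity values, molar masses and mixing rule are illustrative assumptions, and nothing here addresses radiative effects):

```python
# Approximate constant-pressure specific heats near room temperature, kJ/(kg*K)
cp_dry_air = 1.005   # treated here as CO2-free air, purely for illustration
cp_co2 = 0.844
M_AIR, M_CO2 = 28.96, 44.01   # molar masses, g/mol

def cp_mixture(co2_ppmv: float) -> float:
    """Mass-weighted Cp of air with the stated CO2 mole fraction folded in."""
    x = co2_ppmv * 1e-6
    mass_frac_co2 = x * M_CO2 / ((1.0 - x) * M_AIR + x * M_CO2)
    return (1.0 - mass_frac_co2) * cp_dry_air + mass_frac_co2 * cp_co2

cp_280, cp_560 = cp_mixture(280.0), cp_mixture(560.0)

# dT = Q / (Cp * m): for the same Q and m, dT scales as 1/Cp.
print(f"Cp at 280 ppm: {cp_280:.6f} kJ/(kg*K), at 560 ppm: {cp_560:.6f} kJ/(kg*K)")
print(f"Relative change in dT for the same heat input: {cp_280 / cp_560 - 1.0:.1e}")
```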

Jim Gorman
Reply to  mkelly
January 1, 2021 4:25 pm

Part of the problem you have is using averages. CO2 concentrations can change dramatically in a period of hours and in different locations. Averages simply hide the variation and resulting differences in true values.

Geoff Sherrington
Reply to  mkelly
January 2, 2021 2:07 am

mkelly,
Have you read
https://arxiv.org/abs/2006.03098

This much-discussed paper from 2020 seems relevant to your concerns.
The air above us is not academically “dry”. Geoff S

fred250
January 1, 2021 2:49 am

What they refer to as “observed” temperatures bears very little resemblance to what was actually observed.

It is also highly contaminated by urban, airport, data from horrendously bad sitings, a whole lot of highly dubious infilling, “adjustments”, homogenisations, and smearing of urban data where it doesn’t belong .. etc etc

Where is the near ZERO warming trend from 1980-1997 as shown by satellite data?

Where is the ZERO warming trend from 2001-2015?

Please show us a “model” which correctly hindcasts the 1940s peak similar to now, in the NH

…. and the 1880–1920 peak in the Australian region.

Unless a model can hindcast correctly to actual real data at least on a hemisphere basis..

… IT IS TOTALLY MEANINGLESS.

Gerald Machnee
Reply to  fred250
January 1, 2021 9:24 am

**It is also highly contaminated by urban, airport, data from horrendously bad sitings, a whole lot of highly dubious infilling, “adjustments”, homogenisations, and smearing of urban data where it doesn’t belong .. etc etc**
This is missing in his charts. The other part missing is the closure of many rural stations around the world which do not show the warming.

Philip
Reply to  fred250
January 1, 2021 9:57 am

Amen, much of their temperature record seems to be a deliberate massage to match the rise in CO2.

GPHanner
Reply to  Philip
January 1, 2021 12:02 pm

Created to feed hysteria.

Loren C. Wilson
Reply to  fred250
January 1, 2021 12:52 pm

Their models are meaningless anyway because they tuned them to the past since they don’t have the fundamental physics correct. A model tuned to the past cannot predict the future. Even though they can fit the past with some reliability, it’s like fitting data with a polynomial and then using it to forecast.
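A familiar toy demonstration of that point (invented numbers, not climate data): a high-order polynomial tuned to a noisy “historical” record fits the training window well, then departs wildly as soon as it is extrapolated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "history": a gentle underlying trend plus noise, on a scaled time axis 0..1
t = np.linspace(0.0, 1.0, 100)
history = 1.0 * t + rng.normal(0.0, 0.3, size=t.size)

# Tune a high-order polynomial to the past; it fits the training window reasonably well...
coeffs = np.polyfit(t, history, deg=9)
in_sample_rmse = np.sqrt(np.mean((np.polyval(coeffs, t) - history) ** 2))

# ...but extrapolate 30% past the end of the data and it departs wildly from the real trend.
t_future = np.linspace(1.0, 1.3, 30)
extrapolation_error = np.polyval(coeffs, t_future) - 1.0 * t_future

print(f"In-sample RMSE:              {in_sample_rmse:.2f}")
print(f"Largest extrapolation error: {np.max(np.abs(extrapolation_error)):.1f}")
```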

fred250
Reply to  Loren C. Wilson
January 1, 2021 1:20 pm

“Even though they can fit the past with some reliability”

They are fitting to a fabricated data set.

Its WRONG, with a built-in FAKE temperature trend, before they even start !!

Tom Abbott
Reply to  fred250
January 2, 2021 8:52 am

“They are fitting to a fabricated data set.”

Exactly right. So, any results they get will not represent reality.

Garbage In = Garbage Out

Jim Gorman
Reply to  Loren C. Wilson
January 1, 2021 4:32 pm

Trust me when I say you are correct as I have a lot of experience in forecasting telephone central office equipment. Forecasting a varying signal is as much art as it is mathematics. Regressions are only accurate within the time frame for which you have data. As you expand into the future the value of regression becomes less and less. For proof, why do we still have economists, financial planners, etc.?

Dave Burton
January 1, 2021 2:57 am

I keep pointing out to the Berkeley Earth team that their own data clearly indicate that climate sensitivity is far below the IPCC’s central value of 3°C (ECS). They keep ignoring it.

This is Berkeley Earth’s tweet, to which I replied:
https://twitter.com/BerkeleyEarth/status/1265596887698214912

This is the 1st tweet of my 11-tweet-long “tweetstorm” reply:
https://twitter.com/ncdave4life/status/1272825217073766402

This is the “unrolled” conversation (see it all on one page):
https://threader.app/conversation/1265589834548215808/JT0sfjnKs3


https://twitter.com/ncdave4life/status/1284158384661569536

https://twitter.com/ncdave4life/status/1307366058114850820

https://twitter.com/ncdave4life/status/1308443108678553603

https://twitter.com/ncdave4life/status/1336701210468945923

https://twitter.com/ncdave4life/status/1336707443750035459

Dave Burton
Reply to  Dave Burton
January 1, 2021 12:17 pm

BTW, there are several available tools for “unrolling” tweetstorms and twitter conversations. The one that I used above is Threader. I chose it for this twitter thread because it is capable of including both sides of a conversation (in this case, @BerkeleyEarth and me). But there are several other good tools available, as well:

● Threader: https://threader.app  @Threader_app
● ThreadReaderApp: https://threadreaderapp.com  @ThreadReaderApp
● Rattibha: https://rattibha.com  @Rattibha
● TheRip: https://the.rip  @threadrip

They each have idiosyncrasies. One or two of them require “logging in” to their site, or “following” them on Twitter, before tweeting to their tool to trigger them. Some require a specific keyword in the tweet, like “unroll” or “compile”. Some will only find recent tweets (e.g., within the last week).

The most popular, currently, is @ThreadReaderApp. It does a nice job of unrolling tweetstorms, though it only shows a single person’s tweets (but that’s often what you want). It inserts way too many obnoxious ads into the generated web page, but using an ad-blocker solves that problem. (I use uBlock Origin, which is available for Chrome, Opera, Firefox & Edge.)

Threader is ad-free, and can compile both sides of a conversation.

Rattibha can also generate a nice .pdf version of the thread.

ThreadRip creates a simple, concise unrolled thread that can easily be copied to the clipboard and then pasted into emails, documents or blog posts.

I’m still experimenting with these unrolling tools, but I’ve found things to like about all four of the ones mentioned above. I sometimes generate several unrolled versions of a thread, all at once, by tweeting a reply to the last tweet in a thread which triggers multiple tools; something like this:

 @threadrip @Rattibha @ThreadReaderApp @Threader_app please unroll / compile this.

Then I inspect the results, to see which one looks best, for that particular thread. (They do different things with cropping/shrinking images, etc.)

Scissor
Reply to  Dave Burton
January 1, 2021 2:59 pm

Interestingly, Arrhenius gave the temperature of the earth as 15 C in 1896. That’s what Berkeley Earth says it is today.

Philip Mulholland
January 1, 2021 3:09 am

Geological Society of London Scientific Statement: what the geological record tells us about our present and future climate

Reply to  Philip Mulholland
January 1, 2021 4:23 am

There are 16 authors for this scientific statement with 14 affiliations
I categorise their affiliations as follows:
University Department 13
Government Organisation (British Antarctic Survey)  1
 
There is no one from the British Geological Survey (BGS) – the Lyell Centre at Heriot Watt is a separate university body, and certainly no one from industry.

This is the first scientific paper I have ever read that starts with an Executive Statement rather than an Abstract. I must remember to adopt this style from now on.
 

Reply to  Philip Mulholland
January 1, 2021 4:50 am

Always remember the first rule of funding:
Never bite the hand that feeds you.

Clyde Spencer
Reply to  Philip Mulholland
January 1, 2021 11:42 am

And the corollary is “Never feed the hand that bites you.”

Reply to  David Middleton
January 1, 2021 7:05 am

“It’s a U.S. Geological Survey Open-File Report.”
David.
Link please.

bonbon
Reply to  Philip Mulholland
January 1, 2021 7:53 am

That “statement” sure looks like it is written for the Board of shareholders.
I presume it is written for the coming Great Reset when they hope trillions of green credit would flow, as gushed by Mark Carney, with conditionalities for sure.

Every corporate statement will be vetted, and it now looks like Uni reports. That is the definition of Corporatism, otherwise known as Gleichschaltung.

Joseph Zorzin
Reply to  Philip Mulholland
January 1, 2021 1:42 pm

“the current rate of CO2 (and therefore temperature) change is unprecedented in almost the entire geological past.”
Sounds like they are 100% certain of the connection between CO2 and temperature. Such certainty seems a bit presumptuous and unscientific.

January 1, 2021 3:25 am

“If the models are reasonably accurate, the early 20th century warming can be explained by natural forcing mechanisms.”
Reducing the scientific interest in the EAW (1918 + two decades) to natural forcing would mean losing one of the most interesting open questions in climatology; see https://1ocean-1climate.com/arctic-warming-100-years-ago-due-to-naval-war-in-europe-1914-1918/

The Washington Post reported on 2 November 1922, based on information from the American consul in Norway to the U.S. State Department in October 1922, with the sensational hint that in 1918 a strong warming began. (published in the Monthly Weather Review) (copy attached)
Particularly the Arctic winter temperatures increased markedly from about 1920 to 1939, as shown in this graphic.

huls
January 1, 2021 3:39 am

Nice to know water vapour has nothing to do with it!

Redge
Reply to  David Middleton
January 1, 2021 5:16 am

David,

We know CO2 concentrations have risen to approx 420ppm, CH4 approx 1850ppb and N2O approx 330ppb.

This means some other molecule or molecules must have decreased.

Any ideas which molecule(s) are less abundant? (I know ppm and ppb are hardly abundant)

TIA

Redge
Reply to  David Middleton
January 1, 2021 5:25 am

That’s what I thought but as you say not proportionally

Where have all the molecules gone! We’re doomed…..

Jim Ross
Reply to  David Middleton
January 1, 2021 8:38 am

The net atmospheric O2:CO2 molar exchange ratio over the longer term looks remarkably consistent to me (assuming that N2 is not changing significantly relative to changes in O2), but it is not 1.1:1 (photosynthesis/respiration) nor is it 1.4:1 (estimate for fossil fuel burning, which should be increasing over time as the mix changes from less coal to more gas). It is approx. 2.2:1 (10.353/4.8).
 
The data shown above are monthly, adjusted to remove the annual seasonal cycle, as published by Scripps and available here: https://scrippso2.ucsd.edu/.
 
The data are from four sites:
ALT: Alert, NWT, Canada
MLO: Mauna Loa Observatory, Hawaii
SAM: American Samoa
SPO: South Pole
 
The gradient of each dataset is almost identical but, not wishing to mix data from different locations, I have just provided the gradient and R squared for Mauna Loa alone (the red dots).
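For what it’s worth, a sketch of the unit conversion implied by the 10.353/4.8 figure above (the slope is the value quoted above, treated as given; the ≈4.8 per meg per ppm conversion assumes an atmospheric O2 mole fraction of 0.2095):

```python
# Slope of the O2/N2 record against CO2, as quoted above: per meg of delta(O2/N2)
# lost per ppm of CO2 gained (treated here as given, not re-derived from the data).
slope_per_meg_per_ppm_co2 = 10.353

# Converting "per meg" of delta(O2/N2) to a ppm-equivalent change in O2 uses the
# atmospheric O2 mole fraction: 1 ppm of O2 is roughly 1 / 0.2095 ~ 4.8 per meg.
per_meg_per_ppm_o2 = 1.0 / 0.2095

exchange_ratio = slope_per_meg_per_ppm_co2 / per_meg_per_ppm_o2
print(f"Implied O2:CO2 molar exchange ratio: {exchange_ratio:.2f} : 1")  # ~2.2
```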

huls
Reply to  David Middleton
January 1, 2021 11:57 pm

Water vapour is the most variable in concentration and the largest contributor to T deltas in the Earth’s atmosphere.
I did not mention a timeframe. Why do you?
Here’s my sources: https://en.wikipedia.org/wiki/Greenhouse_gas

Where are yours?

huls
Reply to  David Middleton
January 2, 2021 12:12 pm

Water vapour accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear sky conditions and between 66% and 85% when including clouds.
The amount of water vapour varies dramatically between 10 ppm and 50,000 ppm (yup, that’s the range) driven by localities and circumstances.
Time has nothing to do with it.

bonbon
January 1, 2021 4:01 am

Only with a coup could such policies as proposed by Prince Charles be enforced.
It sure looks like the USA has become a Dominion not a Republic.
Welcome back to the Commonwealth!

Chris Wright
Reply to  bonbon
January 1, 2021 4:39 am

I’m confused. The comments by Jim Reilly are actually very good e.g. it starts with:
“The world’s climate is changing, as it always has…..”
And yet he is apparently responsible for a “proof” of human-caused CO2 warming that is so bad as to be close to fraudulent.
Graph A appears to show that the climate model has made a near-perfect prediction of global temperatures since 1880. In reality models have turned out to be hopelessly wrong, typically forecasting warming up to 3 times higher than reality.
Almost certainly the model is recent, certainly no more than a few years old. If so, then they already knew the answer before creating the model.

To reproduce this “proof” I would do the following:

  1. Put in an assumed warming due to CO2. The greater the assumption, the more dramatic the “proof”.
  2. Adjust the model to reproduce the historical temperature record from measurements. These adjustments will automatically remove any false trends introduced by the CO2 assumption. Parameterisation is used in all climate models, because an honest model based purely on physics is completely impossible for a variety of reasons. One modeller referred to parameterisation as their “dirty secret”. As Willis and others have commented, ongoing parameterisations will almost certainly lead to models fitting historical data, rather like Darwinian evolution (survival of the fittest parameterisations). It is in effect curve fitting that has nothing to do with understanding how the climate works.
  3. Run the full model as it stands (Exhibit A, shown above). Because it has been adjusted to fit the data, it will fit the data.
  4. Now remove the CO2 assumption and run the model again (Exhibit B, shown above). With the CO2 assumption removed, the predicted warming will be much less, and the graph will not fit the historical data. Of course, the greater the CO2 assumption, the bigger the difference and the more dramatic the “proof”.

Of course, this would be junk science because the “proof” was created by the adjustments – sorry, I mean parameterisation. Putting this “proof” forward is pretty well fraudulent, particularly if the person knows that the models are adjusted, which Reilly would have known. It does him no credit, and his quoted words appear to be empty rhetoric. Perhaps Jim Reilly should practice what he preaches.
Chris
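A toy numerical version of the recipe sketched above, purely to illustrate the logic (the “observed” record, the CO2 term and the one-parameter “natural” term are all invented; this is a curve-fitting cartoon, not a GCM):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented "observed" anomaly record: a slow rise plus noise
years = np.arange(1880, 2021)
x = (years - 1880) / 140.0
observed = 1.1 * x + rng.normal(0.0, 0.1, size=years.size)

# Step 1: assume a CO2-driven warming term (shape and size chosen arbitrarily)
co2_term = 0.9 * x ** 2

# Step 2: tune the remaining free "natural" parameters so the total matches history
natural_coeffs = np.polyfit(x, observed - co2_term, 1)

# Step 3: the full (tuned) model reproduces the record it was tuned to
full_model = co2_term + np.polyval(natural_coeffs, x)

# Step 4: switch off the assumed CO2 term and the fit degrades, by construction
natural_only = np.polyval(natural_coeffs, x)

print(f"RMS misfit, full model:   {np.sqrt(np.mean((full_model - observed) ** 2)):.2f}")
print(f"RMS misfit, natural only: {np.sqrt(np.mean((natural_only - observed) ** 2)):.2f}")
```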

EdB
Reply to  David Middleton
January 1, 2021 8:41 am

There is no model fit whatsoever for the 1910 to 1940 temperatures. How can they say that the models work?

EdB
Reply to  David Middleton
January 1, 2021 9:42 am

In an engineering world, as opposed to the fake science world, an engineer would say the model does not work. The model is off by a factor of 2 in 1910, and a factor of 4 in 1940.

That’s garbage.

Dave Fair
Reply to  David Middleton
January 1, 2021 2:06 pm

Look at the median model line; it does not reflect the 30+ year dramatic warming nor the subsequent significant cooling. The clear multi-decadal actual temperature trends are not reflected by the models, even with 20/20 hindsight. To claim that actual temperatures staying within the wildly varying results of hot vs. cold models defines the accuracy of models is beyond absurdity.

Dave Fair
Reply to  David Middleton
January 1, 2021 4:59 pm

If, in hindcast, the UN IPCC climate models cannot determine the factual El Nino dominated periods from the La Nina, then what are they modeling? While I believe it is now impractical to predict ENSO periods in the future, could the modelers at least identify the climatic factors influencing past ENSO events?

Tim Gorman
Reply to  David Middleton
January 1, 2021 12:33 pm

If you take the modeled and observed data out of the graph and just leave the uncertainty interval I can generate almost any kind of trend line you want. Even a horizontal line from 1930 to 2020. I.e. the top of the uncertainty band in 1930 was about 0.4C and the bottom of the band in 2020 is just about the same 0.4C. Each point, 1930 and 2020, is just as likely to be the true value for those years as any other point in the uncertainty interval. The “stated” observation value is *not* the true value any more than any other value in the uncertainty band.

It doesn’t do any good to show the uncertainty in the anomaly if you aren’t going to pay attention to it.

Dave Fair
Reply to  Tim Gorman
January 1, 2021 2:22 pm

There is no uncertainty as defined in mathematics; at any given point in time, the CliSci “uncertainty” is defined by the output of the hottest speculative model vs. the coldest speculative model. The further backward or forward in time one goes from the late 20th Century tuning period, the greater the differences between the outputs of those independently speculative models become. The differences between the models seem to be primarily caused by the different ECSs assumed by the different modeling teams. As stated by members of those teams, the models are adjusted until they achieve an ECS that “seems about right.”

Carlo, Monte
Reply to  Dave Fair
January 1, 2021 4:13 pm

More importantly, they reflect the modelers’ crystal ball gazing as to what CO2 versus time in the future will be.

It is my opinion that averaging the outputs of multiple climate models produces numbers that are completely without meaning.

Dave Fair
Reply to  Carlo, Monte
January 1, 2021 5:09 pm

Only if every modeling team were forced to use the exact same historical data (especially assumed forcings of GHGs and aerosols) would comparing one model to another make any sense. It is a travesty that the different teams can start with a difference of over 3C in assumed baseline average global temperatures. None of them model the same universe.

Jim Gorman
Reply to  Dave Fair
January 1, 2021 5:09 pm

Ahhhh, spoken like a true mathematician. The uncertainty being discussed is primarily measurement uncertainty of the data used to populate model parameters. Things such as uncertainty adding in quadrature or root-sum-square rather than root-mean-square as used in statistics. Things like accuracy, precision, significant digits, systematic uncertainty. What you learn in metrology classes versus statistics classes. Google “Guide to the Expression of Uncertainty in Measurement”.

What you are discussing is the mathematics of the models. Did it ever occur to you that the numbers the models use are based on inexact physical measurements that contribute to the uncertainty of the outputs of the models? These aren’t just numbers extracted from a database to be treated as if they are nothing more than numbers on a number line that can be massaged and tortured to achieve a desired output.

Read this for a starter. guide to uncertainty in measurements – Bing

Dave Fair
Reply to  Jim Gorman
January 1, 2021 5:46 pm

As I said before, it’s the only game in town. Taken to the extreme, error would never let you agree on any set of numbers nor model anything. Sadly, being a purist never allows one to participate in real-world arguments. Purists keep banging on that nail-head while everyone else has gone on to more productive discussions. There are many other ways to show that the UN IPCC models are bunk. Arguing angels and pinheads should be left to academic journals, not to discussions of the basics of bureaucratic policymaking. [And I am not a mathematician.]

Jim Gorman
Reply to  Dave Fair
January 2, 2021 4:53 am

Would you fly in an airplane that had been built solely from modeled data and NO physical measurements? How about buy a car whose engine had been built only from modeled data with no physical testing?

Models have a place to establish a starting point if they can be validated. NO climate model has ever been validated nor have they provided accurate forecasts. Therefore, they are useless for making policy decisions.

Dave Fair
Reply to  Jim Gorman
January 2, 2021 5:09 pm

I thought I had said that. Arguing mathematics with a politician is a fool’s errand. Argue facts and money.

Tim Gorman
Reply to  Dave Fair
January 2, 2021 5:51 am

Dave,

You can’t model what you can’t measure. The GAT is a calculated value and not a measured value. Every temperature input to the GAT and the GCMs is an averaged value, in other words calculated data and not measured data.

That’s the biggest reason the GAT is so meaningless and the GCMs are so useless. Every useful model that I know of has been validated against physically measured values, from the lift of an airplane wing to the drawbar horsepower of a modern tractor.

This isn’t arguing angels and pinheads. It’s a validation of every engineering principle I have ever learned.

Dave Fair
Reply to  Tim Gorman
January 2, 2021 5:20 pm

The UN IPCC and every other governmental body uses the GAT and GCMs. Like I said, it’s the only game in town. The fact that you don’t believe in them means nothing in the argument over CAGW. Arguments over what Argo, satellites and radiosondes are showing vs GCM outputs would seem to be more productive.

Geoff Sherrington
Reply to  Jim Gorman
January 2, 2021 7:45 pm

Thank you, Jim Gorman.
You have experience with errors and uncertainties that seems not to be matched by many (any?) in the climate research sector. Thank you for expanding on the summary points I made in the first comment here.
The problem is far more serious than those in the climate sector are prepared to understand and admit. It makes much climate research a farce.
Error estimates were created by mathematicians and statisticians because they have important uses in the understanding of masses of data. They are not to be estimated by (say) a simple fit of a linear equation through a cluster of points, because that will often not include the full scope of the errors. Uncertainty has much scope. As a fairly trivial example, can we be certain that the diameters of the glass bores of mercury thermometers were as constant in old thermometers as in recent ones? Uncertainty can arise without numbers, but more comprehensive understanding relies much on them. Geoff S

John Shotsky
Reply to  Chris Wright
January 1, 2021 6:29 am

Even simpler would be to simply run any/all of the models backwards in time. Start with now and run the models as if CO2 were dropping from 420 ppm to 275 ppm – see how long it takes to show an ice age.

Redge
Reply to  bonbon
January 1, 2021 5:20 am

Bonbon,

Where on earth are you getting your info from? You’ve posted this nonsense several times now.

Prince Charles will never be king of the USA – he’ll be lucky to be King of England and I’m a monarchist! Just not Charlie

bonbon
Reply to  Redge
January 1, 2021 7:28 am

‘Fraid you are not listening up!
See Prince Charles’ gushing statement for the Davos Special.
Look up the interview with Margaret Atwood on BBC radio Dec 29.

President Trump dumped the Paris trap, it looks like Biden would intend its full implementation. Even the new EU Deal with China has this poison in it.
Look at BoJo’s unbelievable green-iness.

So the Green King, anybody? I wonder if Greta would be summoned to the Palace?

Mark Carney of UN Finance is laying the groundwork, with the FED and BlackRock in tow, and Titanic digital credit pumping at full throttle.

This makes Wonderland look sedate…

Redge
Reply to  bonbon
January 1, 2021 7:31 am

Oh, dear

Dave Fair
Reply to  bonbon
January 1, 2021 2:29 pm

When U.S. Federal politicians begin debating the costs of the various GND schemes, the U.S. will determine the fate of worldwide CliSci. Screw the opinions of kings, queens, princes, billionaires, pundits, celebrities, etc.; the American middle class always gets what it collectively wants. Beware of a future where ChiCom dictators take over that role.

Ewin Barnett
January 1, 2021 4:24 am

I think it is misplaced to fret about any change in mean temperature until we know what the optimum for the biosphere should be. Are we above or below it presently? Converging or diverging?

Is it just me, or is this the elephant in the room, the question that shall not be asked?

Jim Gorman
Reply to  David Middleton
January 1, 2021 5:23 am

The amount of “contribution” should be the issue. Almost every study and paper I read simply ignores natural variation as part of the temperature increase, and let me add “perceived” temperature increase. When the Tmax and Tmin temps are broken down, there is even less evidence that the globe is going to burn up.

Klem
Reply to  Jim Gorman
January 1, 2021 7:05 am

Adding a single molecule of water vapour to the atmosphere ‘contributes’ to warming, so stop washing your clothes and watering your lawn.

DMA
Reply to  David Middleton
January 1, 2021 10:00 am

“We have fairly good evidence that we have contributed to the warming over the past 40-50 years.”
Most of the evidence is based on the incorrect assumption that human emissions have caused the increase in CO2 content. Because it has been shown that only about 15% of the increase is due to humans, and there is little reliable evidence that the changing CO2 content is driving temps, our contribution is minor and certainly not a reason to panic.

Jim Ross
Reply to  David Middleton
January 2, 2021 2:32 am

David,

Could you provide a link to a paper/article/figure which describes the basis for this statement, please? I am genuinely interested, having looked at the δ13C measurements there. Thank you.

Jim Ross
Reply to  David Middleton
January 2, 2021 10:52 am

David,
 
Thank you very much for your response. The Law Dome data support the recent direct observations that the incremental atmospheric CO2 since 1750 has had a net δ13C content that is not changing over time (at -13 per mil) other than short-term fluctuations (ENSO, Pinatubo) and yet I have not seen a recent model that is able to match this observation, hence my interest in your statement. Since the experts are still having difficulty obtaining a good match against the observed δ13C depletion, even with a very sophisticated model, I am unwilling to say that anything in respect of atmospheric CO2 growth is “conclusively demonstrated”.
 
References: Figure 1 in Köhler et al (2006): http://www.biogeosciences.net/3/539/2006/bg-3-539-2006.pdf and Keeling et al (2017): https://www.pnas.org/content/114/39/10361.

Jim Ross
Reply to  David Middleton
January 2, 2021 11:59 am

I have to disagree. It’s not about the trend being consistent with (or probably better stated as “not inconsistent with”) some theory or other. The data demonstrate that the value is essentially a constant. That is the simple fact that must be explained.

Jim Ross
Reply to  David Middleton
January 2, 2021 12:08 pm

Please take this data and derive the δ13C content of the incremental CO2. It requires application of the Keeling plot. It is not consistent with an anthropogenic source (which would show a δ13C content of about -28 per mil).

Jim Ross
Reply to  David Middleton
January 3, 2021 3:02 am

OK, let’s take your three points individually:
 
“δ13C of gasoline and diesel exhaust range from -25 to -30 per mil, the exhaust from natural gas combustion is in the -40 to -43 per mil range.”
 
The approximation widely used in the literature for the burning of fossil fuels is -28 per mil, as I stated. If you are suggesting the experts are wrong and it is actually lower, that is fine with me as it makes the discrepancy between this figure and what the data tell us even larger (see my response to your third point). The other point is, of course, that the estimated ratio is recognized in the literature as decreasing over time as the fuel mix changes.
 
“δ13C of the atmosphere is around -8 per mil.”
 
Actually, it is around -8.5 per mil these days, but no big deal.
 
“Emissions from fossil fuel combustion aren’t replacing the atmosphere. They are increasing the δ13C depletion. The Law Dome ice core isn’t measuring exhaust fumes. It’s measuring the atmosphere.”
 
Well doh! How could I have missed that vital point (/s)? What you are missing, or failing to understand, is this. If we know the level of atmospheric CO2 at two different times and we also know (have measured) the δ13C at those times, we can determine the average content in δ13C terms of the incremental CO2. This is the basis of the Keeling plot. Let me give you an example. If I read off your graph above that atmospheric CO2 between 1991 and 2006 has increased from about 355 to 380 ppmv and atmospheric δ13C has decreased from -7.82 to -8.19 per mil, we can determine that the extra CO2 must have had, on average, a δ13C of ….
 
((380*-8.19) – (355*-7.82)) / (380-355)
 
Which gives us -13.4 per mil (close enough for government work). This equation is widely used in the literature including in both of the references I provided links to previously. It is essentially a mass balance of 13C, with a very small approximation that 12C is equal to total C (it is about 99% of the total).
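
For anyone who wants to reproduce the arithmetic, a minimal sketch in Python (the function name is mine; the inputs are simply the values read off the graph above):

```python
# 13C mass balance (Keeling-style): average delta13C of the CO2 added
# between two times, given atmospheric CO2 (ppmv) and delta13C (per mil).
def incremental_d13c(co2_1, d13c_1, co2_2, d13c_2):
    return (co2_2 * d13c_2 - co2_1 * d13c_1) / (co2_2 - co2_1)

# Values read off the graph: 1991 (355 ppmv, -7.82) to 2006 (380 ppmv, -8.19)
print(incremental_d13c(355, -7.82, 380, -8.19))  # ~ -13.4 per mil
```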
 
This is just an average, however, and we want to know how it changes (or not) over time. The basis of the Keeling plot, which is where 1/CO2 is plotted against δ13C, is that it will be a straight line only if the incremental CO2 has a constant δ13C ratio. So, here we are back at the Law Dome data. The reference that I gave above as Figure 1 in Köhler et al (2006): http://www.biogeosciences.net/3/539/2006/bg-3-539-2006.pdf shows the Keeling plot for Law Dome and documents a value of -13.1 per mil with an R squared of 0.96. On the other hand, if I just plot the actual atmospheric sample measurements from the South Pole observatory, available from Scripps, we get -13.0 per mil (the intercept) and an R squared of 0.99:
 
[Keeling plot of the South Pole observatory measurements attached as an image]
 
Ultimately, my point is this. Yes, the atmospheric δ13C value is decreasing because the incremental CO2 has a lower δ13C ratio on average than the current atmosphere. However, the decrease is not consistent with the incremental CO2 being entirely or largely due to anthropogenic sources. My other reference, Keeling et al (2017): https://www.pnas.org/content/114/39/10361 introduces multiple very significant adjustments to the atmospheric δ13C data (mainly differential fractionation assumptions) and yet is still unable to match the observed values. They only succeed in matching the trend (since 1977) by introducing a new variable. Elephants and wiggling trunks come to mind. This is the most recent model work that I have seen with respect to δ13C. The fact that the model is unable to fully explain the observations is why I do not like statements such as “conclusively demonstrated” in this context.

Jim Ross
Reply to  David Middleton
January 3, 2021 9:31 am

No you don’t. The method only looks at the atmosphere and derives the δ13C of the incremental CO2 in the atmosphere that matches the observed changes in atmospheric CO2 and atmospheric δ13C. It is an observed fact. It does not consider sources and sinks; it is simply the relationship found in the data (as I showed with the Keeling plot for the South Pole data). Nevertheless, it is a net effect, so does not preclude multiple sources and sinks being involved. However, the more sources/sinks you believe are involved, the less likely a simple linear relationship would be the result on the Keeling plot.
 
The relationship (constant δ13C of -13 per mil) is so strong that it can be used to predict future values of atmospheric δ13C for future values of atmospheric CO2 based solely on an initial assumption of 278 ppmv and -6.4 per mil (the assumptions of Keeling et al for 1765). If all the incremental CO2 up until 1991 (i.e. an increase of atmospheric CO2 from 278 to 355 ppmv based on the plot you showed earlier) had a constant δ13C of -13 per mil, the predicted atmospheric δ13C in 1991 would be:
 
((278*-6.4) + (77*-13))/355 = -7.83 per mil. Is that close enough to -7.82 for you?
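
The same check runs in a couple of lines of Python; this is only a sketch of the mass balance above, using Keeling et al's 1765 starting point and assuming every ppmv added carries -13 per mil:

```python
# Predicted atmospheric delta13C once CO2 has risen from co2_0 to co2_t,
# assuming all of the added CO2 carries a constant delta13C of -13 per mil.
def predicted_d13c(co2_0, d13c_0, co2_t, d13c_added=-13.0):
    return (co2_0 * d13c_0 + (co2_t - co2_0) * d13c_added) / co2_t

# Keeling et al's 1765 starting point of 278 ppmv and -6.4 per mil; CO2 at 355 ppmv (~1991)
print(predicted_d13c(278, -6.4, 355))  # ~ -7.83 per mil (observed ~ -7.82)
```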
 
Now, if you wish to establish a model that explains ‘why’ the data show a value of -13 per mil, then you definitely do need to consider the behavior of all sources and sinks together with the isotopic fractionation across boundaries, and this is where the sophisticated model of Keeling et al was having some difficulty matching the observations. Perhaps it is too sophisticated!

Jim Ross
Reply to  David Middleton
January 3, 2021 10:46 am

The value of -13 per mil is an observed characteristic of the data which can be demonstrated by the Keeling plot and also by using it to predict future atmospheric δ13C values without any consideration of models or sources/sinks (as I demonstrated above for the period from 1765 to 1991). If you want to understand why it is that value, then you do need a model that incorporates all relevant inputs as I stated in my final paragraph. Even then, Keeling et al were unable to fully match the δ13C observations.

I did not see anything in Tans’ slides that explains the atmospheric δ13C observations. Perhaps you could point that bit out for me.

Tom Abbott
Reply to  Jim Ross
January 4, 2021 4:34 am

“Since the experts are still having difficulty obtaining a good match against the observed δ13C depletion, even with a very sophisticated model, I am unwilling to say that anything in respect of atmospheric CO2 growth is “conclusively demonstrated”.”

Thank you very much! “Conclusive” is a very big, bold word. Extraordinary claims require evidence to be believed. I have seen no conclusive evidence that CO2 and temperature rise or fall are connected.

Tom Abbott
Reply to  David Middleton
January 4, 2021 4:30 am

“Despite relatively wide swings in temperature from the Medieval Warm Period to the Little Ice Age, there was very little change in atmospheric CO2.”

And yet you think CO2 and temperature change are connected? That statement seems to contradict that thinking.

Tim Gorman
Reply to  David Middleton
January 2, 2021 5:57 am

Dave,

So 20% to 50% of the rise is natural. Guess what? That simply means the Earth is going to get hotter no matter what we do. The only difference will be the timeframe in which it happens.

So what are we expected to do to stop the natural climate change? It just means that it will be our great-grandchildren that will die instead of our grandchildren.

Unless anthropogenic climate change is 100% of the change there simply isn’t anything we can do to affect the ultimate result!

Tim Gorman
Reply to  David Middleton
January 2, 2021 8:47 am

David,

I haven’t seen much in the way of economically viable things to slow down anthropogenic growth in CO2 other than what industry is doing on a private basis, e.g. converting from coal to natural gas. EV’s are not economically viable because of infrastructure costs. Unreliable wind and solar are not economically viable. Batteries are not economically viable on a grid-level scale.

Are hybrid ICE/EV autos economically viable? Not based on their growth in the market place.

Hydrogen? I can’t help remembering the Hindenburg.

Tim Gorman
Reply to  David Middleton
January 3, 2021 10:49 am

EV’s will only make inroads via government mandates. If they actually provided increased utility over ICE vehicles they would have a *much* larger market penetration than they do.

I keep hearing about battery improvement. I’ve heard that over the past 25 years. But no real innovation has occurred at all. Just incremental improvement of existing technology!

Hydrogen? Same thing. No one yet has a solution for how to pipe and distribute hydrogen. Existing pipe material degrades too fast to be useful. I keep hearing about how it’s going to happen but it never does.

The problem is that there is nothing for the government to subsidize that *WORKS*. Unreliable energy doesn’t work no matter how you subsidize it. Ask Gavin Newsom. And that includes batteries that die in your car during the next blizzard or hurricane evacuation.

What works is natural gas. *THAT* is what should be subsidized instead of being penalized. What works is nuclear. *THAT* is what should be subsidized instead of penalized. What works is gasoline and diesel. *THAT* is what should be subsidized instead of penalized.

Tom Abbott
Reply to  David Middleton
January 2, 2021 5:34 pm

Could you do an article on the Law Dome DE08 and show us how that conclusively demonstrates what you say it demonstrates?

Tom Abbott
Reply to  David Middleton
January 4, 2021 4:46 am

Well, how about doing one that talks about Dr. Happer’s latest research where he claims that CO2 is “saturated” at our current concentrations of CO2 in the atmosphere, and this means that CO2 has very little ability to affect the temperatures above current concentrations?

If that is the case, then it doesn’t matter how much CO2 was in the atmosphere in past times, CO2 can only raise the temperatures so much.

I don’t know if you have thought about this or not, but what does Dr. Happer’s research do to your “conclusive” claim that CO2 has a large effect on the Earth’s temperatures?

Dr. Happer says CO2 has a negligible effect on temperature above the concentrations we currently have in the atmosphere (about 415ppm), so concentrations above that amount in the past would also have negligible effects.

So your ice core data may not be giving us the whole picture.

alexei
Reply to  David Middleton
January 1, 2021 12:20 pm

““We have fairly good evidence that we have contributed to the warming over the past 40-50 years.””

This sounds straight out of the AGW hymn book – and seems antithetical to the general sceptical view. Can you elucidate?

fred250
Reply to  alexei
January 1, 2021 12:39 pm

“““We have fairly good evidence that we have contributed to the warming over the past 40-50 years.”””

Yes, data manipulation, UHI effects, airport sites, infilling, and massive homogenisation and data “adjustment” effects….

…. have contributed to the FABRICATED and nonsensical, AGW-biased, calculation of the Global Average Temperature.

Tom Abbott
Reply to  alexei
January 2, 2021 5:37 pm

“This sounds straight out of the AGW hymn book – and seems antithetical to the general sceptical view.”

It does, and it is.

Mike Dubrasich
Reply to  David Middleton
January 1, 2021 1:27 pm

We don’t know if additional warming would be good, bad or indifferent.

Well I do, even if you don’t. A warmer Earth would be vastly better than the one we have today. Warmer means longer growing seasons, more rain, more bio-productivity, more bio-diversity, more Life in general. Life is a good thing, as opposed to death.

For 99% of the last 240,000,000 years Planet Earth has been much warmer than today. We are mired in the Ice Age. The normative condition for this planet is 20ºF warmer at the poles. The normative is desirable, more so than extensive cold and ice.

Every major extinction event in geologic history was due to cooling. Ice is death. Nothing lives on ice. Unless you are pro death, you should be pro warmth.

CO2 is the fundamental building block of life. It is more than good; it is essential. CO2 may or may not be warming the planet, but if it is then good, hooray, thank you very much. More life is better.

I guarantee that if the average global temperature jumped up 10ºF or more, it would be wonderful. The oceans will not boil into Outer Space. That’s crazy talk. Every silly alarmist prediction will not occur. Life would benefit. Warmer Is Better.

Dave Fair
Reply to  David Middleton
January 1, 2021 2:39 pm

I believe that “pursuit of happiness” was substituted for “property” because the northern U.S. States did not want to allow the southern States to claim slaves were property and slavery was thereby protected in perpetuity by the Constitution. The pre-existing natural right to life, liberty and property were meant to be protected by the Constitution from arbitrary, unlawful acts by the government.

Dave Fair
Reply to  David Middleton
January 1, 2021 4:53 pm

And any reading of UN IPCC and green/leftist NGO literature (all being marxist) shows that they are very clear in desiring to cast the fruits of my labor far and wide, at their discretion. Screw ’em!

Tim Gorman
Reply to  Dave Fair
January 2, 2021 6:01 am

You are mixing up the Declaration of Independence and the Constitution.

There is no “pursuit of happiness” in the Constitution. And the Declaration of Independence had nothing to do with protecting slavery.

Chris Wright
Reply to  David Middleton
January 2, 2021 2:56 am

“We don’t have the slightest clue as to what the optimal temperature should be.”
A few weeks ago I came across a study looking into the optimum conditions for life on other planets. The researchers stated that the optimum global temperature for life was five degrees warmer than Earth. Another poster referred to this here recently.
The Holocene was characterised by a very large and dramatic warming of several degrees. And what happened? The rise of human civilisations. I believe the trend since then has been consistent cooling, which is a bit worrying. The next ice age is on the way, but hopefully not for a few more thousand years.
Yes, I’m sure we can’t put a precise figure on the optimum temperature, but it is probably a few degrees warmer than today. Most global warming occurs in winter, at night and in cooler climates.

“We have fairly good evidence that we have contributed to the warming over the past 40-50 years.”
I would like to know what evidence you’re referring to. The one referred to in this article is based on junk climate models and is close to scientific fraud.
I think most sceptics would agree that CO2 has contributed some warming, but the amount is small and possibly trivial. The world cooled about one degree during the Little Ice Age. And how much global warming has occurred since the end of the LIA in 1850? About one degree. This suggests two things: that most if not all the warming was natural – and that it was of huge benefit to humanity and the natural world.
Chris

Tom Abbott
Reply to  David Middleton
January 2, 2021 3:59 pm

“We have fairly good evidence that we have contributed to the warming over the past 40-50 years.”

Got a link to that evidence?

Tom Abbott
Reply to  David Middleton
January 4, 2021 5:32 am

“If the TCR is +1.10 ± 0.26 K, human activities have caused about 3/4 of the warming over the past 40-50 years.”

Human-derived CO2 does not explain the equivalent warming from 1910 to 1940.

It was just as warm in the 1930’s as it is today, the magnitude of the warming is the same, and the temperature rises are equivalent.

Something else, other than human-derived CO2, caused the warming from 1910 to 1940.

I would submit it was Mother Nature. I would also submit that the period from 1980 to the present is just a repeat of the past period, governed by the same forces, and CO2 is just along for the ride.

Jay Willis
Reply to  Ewin Barnett
January 1, 2021 5:06 am

Great comment. We should start with optimum CO2 levels. But this is exactly like asking where the optimum position of the moon is with respect to the earth. It exposes the whole load of bollox for what it is. As one commenter states above – empty rhetoric from the USGS.

bonbon
Reply to  Jay Willis
January 1, 2021 7:37 am

LOL! Very good point!
The optimum position of the Moon, as Krafft Ehricke noted is exactly where it is – within reach of available chemical propulsion.

Then the optimal biosphere is where it is – accessible with modern agricultural technology.

I guess that kind of pre-established harmony drives the eco-nuts, well, nuts.

Jeff Alberts
Reply to  bonbon
January 3, 2021 5:23 pm

“pre-established harmony”

You’re joking, right?

Dave Burton
Reply to  Ewin Barnett
January 1, 2021 12:40 pm

When climatology was a geophysical science, instead of political science, scientists called warm periods, including periods significantly warmer than now, “climate optimums,” because just about everyone understood that they are preferable to cold periods.

What’s more, “global” warming from eCO2 isn’t actually very global. It doesn’t much affect tropical high temperatures, which is nice, because the tropics are warm enough already. As Arrhenius predicted in 1896, higher CO2 levels disproportionately warm cold winter nights at high latitudes, thereby making their brutal winters a little bit less harsh, and slightly extending their short growing seasons.

Here are a couple of excerpts from Arrhenius’s 1908 book, discussing the beneficial effects of anthropogenic CO2 emissions:
[Excerpts from Arrhenius’s 1908 book attached as an image]

Joseph Zorzin
Reply to  Dave Burton
January 1, 2021 1:54 pm

As a guy who has worked outdoors all year long since 1973 in cold, damp Massachusetts, I vote for warmer and drier. Bring it on!

Jeff Alberts
Reply to  Ewin Barnett
January 3, 2021 5:06 pm

Mean temp is meaningless, utterly.

Jeff Alberts
Reply to  David Middleton
January 5, 2021 8:39 pm

It is if you’re taking the mean of temp readings from different locations.

Tim Gorman
Reply to  Jeff Alberts
January 6, 2021 5:55 am

Jeff,

I agree with you. First off, they determine a mid-range value at each station, and mid-range is not the mean of the temperature profile over 24 hours. So they have already started off wrong statistically. Secondly, they assume the stated values of the temperatures are 100% accurate and do not do an uncertainty analysis. They then try to add that mid-range value from one station to another in order to calculate an average between the two, again without doing any kind of uncertainty analysis for the combination. Then they do it over and over and over again.

What they wind up with is a meaningless value as it applies to “climate”. Climate is the entire temperature profile, daily, monthly, and seasonally.

It’s why I am such an advocate of using heating/cooling degree-day values. These are determined by integrating the entire temperature curve using absolute temperatures. They give a much better picture of the actual climate at any location. And *that* would not be meaningless.
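
For what it's worth, the degree-day integration is only a few lines of code. A minimal sketch, assuming a day of hourly readings in °C and using the conventional 18.3 °C (65 °F) base, which is my choice for the example rather than anything specified above:

```python
# Heating/cooling degree-days from one day of hourly temperatures (deg C).
# Hours below the base accumulate heating degree-days, hours above it
# accumulate cooling degree-days; each hour counts as 1/24 of a day.
BASE_C = 18.3  # conventional 65 F base, assumed for this sketch

def degree_days(hourly_temps_c, base_c=BASE_C):
    hdd = sum(max(base_c - t, 0.0) for t in hourly_temps_c) / 24.0
    cdd = sum(max(t - base_c, 0.0) for t in hourly_temps_c) / 24.0
    return hdd, cdd

# A made-up cool day that never reaches the base temperature
day = [4, 3, 2, 2, 3, 5, 8, 11, 14, 16, 17, 17, 16, 14, 12, 10, 8, 7, 6, 5, 5, 4, 4, 4]
print(degree_days(day))  # heating degree-days > 0, cooling degree-days = 0
```

Summed over a month or a season, those numbers describe the temperature profile rather than just its mid-point.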

Tenuc
January 1, 2021 5:15 am

From the article…
“…If the models are reasonably accurate…”

The GCMs used are demonstrably not even approaching any degree of accuracy. They are not forecasts, and do not claim to be so.

The huge problem that prevents accuracy is that our dynamic and turbulent weather systems are driven by spatially bounded spatio-temporal chaos, which strives towards maximum entropy production (MEP). It is as futile to use linear averages for temperature trends, for example, as it is to average the numbers in your local telephone directory.

Better mathematical tools need to be developed before we can ever hope to have accurate weather/climate forecasts. I think we are many decades away from achieving this. In the meantime, any WAG is good enough, providing it gives politicians the answers they want to see.

Stuart Nachman
Reply to  Tenuc
January 1, 2021 7:03 am

I remember an educator who specializes in computer and system design saying that there is not enough computer power in the world to accurately model turbulence in the atmosphere. Is there any evidence contradicting this?

Philo
Reply to  Stuart Nachman
January 1, 2021 8:55 am

Dr. Christopher Essex, Professor and Associate Chair, Department of Mathematics, University of Western Ontario, Canada, still has this YouTube link up:

https://www.youtube.com/watch?v=19q1i-wAUpY&feature=emb_logo

It’s kind of surprising it’s still up, but the basic principles are unchanged even with advances in computers. Computers are still many orders of magnitude behind the eight-ball.

RockyRoad
January 1, 2021 5:21 am

If the USGS would discuss the estimation variance associated with all the GCMs used for modeling climate, that would be a big step in getting back to reality.

Philo
Reply to  RockyRoad
January 1, 2021 9:13 am

A number of people have done this. I don’t have the references handy, but any statistician would agree. Given the multitude of inter-related equations in the models and the limited scale allowed by computing, after something like 4-5 iterations the error limits hit the boundaries of the possible calculations. Much like weather models, the possible errors build up quickly until they exceed the range of the results.

From what I’ve read, all the climate models limit the error after every iteration in order to get results. There are many articles here on WUWT that evaluate the results. Look up Dr Roy Spencer in the column on the right margin. He has several articles on this.

Dave Fair
Reply to  Philo
January 1, 2021 2:44 pm

Add “take a dump” to “wag its tail.”

CO2isLife
January 1, 2021 5:33 am

Models are only as accurate as the data used in them. There needs to be an open-source climate model approach with extreme transparency. We’ve already seen the damage a few nutjobs were able to do at the FBI and CIA. Concentrating such power in unelected activists is simply insane, and the outcome is predictable.

bonbon
Reply to  David Middleton
January 1, 2021 7:44 am

Hoping for House action in 2 years? The USA might not exist as a constitutional republic by then.

Dave Fair
Reply to  David Middleton
January 1, 2021 2:54 pm

As much as I hate it when Senators use the filibuster to block clearly needed legislation, it is worth the cost to preserve our Constitutional Republic of the United States. We can always come back later to pass obviously needed legislation. Once something is wrecked by hasty action, however, it can become impossible to fix it. The various States were suicidal when passing the Constitutional Amendment to force direct election of Senators; the States lost control of their Republic to a monolithic Federal government (the Swamp). With unlimited authority to spend, the Feds can buy anything they want, including a compliant populace.

Clyde Spencer
Reply to  David Middleton
January 1, 2021 11:52 am

I hope that, if the proposal by Jim Reilly isn’t cancelled by the incoming administration, the responsibility for oversight and execution isn’t handed over to the biology/ecology ‘new hires.’

Dennis Topczewski
January 1, 2021 5:55 am

Humans have covered tens of thousands of square miles of the planet’s surface with black asphalt and dark roof shingles, consume enough energy to boil Lake Michigan every three years, have created tens of thousands of square miles of heat islands (cities), and the temperature has only risen one degree K since the end of the Little Ice Age. Remarkably stable.

Philo
Reply to  Dennis Topczewski
January 3, 2021 5:04 am

The earth gets more energy from the sun, some ±50 W/m², 24 hours a day, adjusted for the curvature of the earth. That somewhat dwarfs man’s energy usage.

Tom
January 1, 2021 6:38 am

@David Middleton- So you think the November election can be characterized as a coup d’état? Have you some science to support that?

Tom Abbott
Reply to  David Middleton
January 2, 2021 5:50 pm

You did say that. 🙂

bonbon
Reply to  Tom
January 1, 2021 7:42 am

It is the first Green coup d’etat, in the Spectrum of color revolutions.

beng135
Reply to  Tom
January 1, 2021 10:03 am

The election was rigged. What would you call that?

Clyde Spencer
Reply to  Tom
January 1, 2021 12:08 pm

You might want to go to https://joannenova.com.au/ for a continuing series on all the suspect and anomalous events related to the past election. There is little doubt in my mind that there is a cloud over the legitimacy of the Biden ‘win.’ If the Democrats were concerned about executing a supposed mandate, they would be anxious to clear up that cloud by supporting open investigations into the accusations. Instead, Jiminy Cricket seems to be suffering from a bad case of laryngitis and the leftist media is in complete denial by whistling the repeated refrains of “unsupported” and “baseless.” Personally, I’m of the opinion that the Democrats are willing to live with the cloud over the legitimacy rather than partake in turning over rocks out of fear what they will find crawling out from under them. It will be interesting to see how many people join the planned demonstrations in DC on January 4-6.

Richard M
Reply to  Tom
January 2, 2021 7:47 am

The massive hacking of US government and industry by China is likely the how. With this level of access achieved it would have been easy to build a massive data set of eligible US voters. Eliminate those who voted in 2016-2018 and you now have a list of people unlikely to vote in 2020.

We already know from a China whistleblower that more than 5 million US ballots were printed in China. The hacking incident now gives us a source for the names on those ballots. They probably limited the fraud to battleground states to reduce the potential for discovery.

The US courts and Democrat government officials have been blocking all attempts to audit the election results which is the only way to discover the truth. We will likely have a new federal government elected by the CCP.

Tom Abbott
Reply to  Tom
January 2, 2021 5:48 pm

“So you think the November election can be characterized as a coup d’état? Have you some science to support that?”

Well, come Jan. 6, you will get to hear all the gory details about the presidential election as that is when the Congress is supposed to approve the State electors and that is when about half a dozen Republican U.S. Senators and over 100 Republican members of the House of Representatives will challenge the electoral votes of various contested States, and will lay out the reasons why they are contesting the votes of those States.

So have a little patience and everything will come out in the open.

Then we will see whether the Trump Team has the evidence to prove the election was stolen or not.

dh-mtl
January 1, 2021 7:24 am

David,

What you don’t mention about Figure 4 is that the temperatures prior to 1945 were all reduced by about 0.2 C in order to hide the decline between 1945 and 1960.

Without this adjustment there would have been no ‘global warming’.

dh-mtl
Reply to  David Middleton
January 1, 2021 9:21 am

I would refer you to these two articles:

‘Weather balloon data backs up missing decline found in old magazine’, WUWT March 18, 2010.
‘More on the National Geographic Decline’, WUWT March 18, 2010.

Jay Willis
Reply to  David Middleton
January 1, 2021 10:09 am

Rubbish David – since (as you say yourself) the adjustments are the same magnitude as the signal that is claimed for AGW – the data cannot be used to justify anything related to AGW – it really doesn’t matter what the ‘data’ are in that case – the observations to which you refer are the adjustments.

Frankly it is all utter rubbish in any case, as the only temperature record of any import for the postulated cause is satellite based. These ground station data are only meaningful in respect of the area surrounding where each datum was observed. Really you ought to grow up as a scientific commentator and face the reality of the whole steaming pile of non-science that is the debunked AGW theory to date, and stop half agreeing that the butchered land temp records are anything but fraud.

fred250
Reply to  Jay Willis
January 1, 2021 12:46 pm

“““These ground station data are only meaningful in respect of the area surrounding where each datum was observed””

Many are not even representative of that, being so badly sited and corrupted.

Perhaps you mean an area of say 10m².. ?

Jim Gorman
Reply to  David Middleton
January 1, 2021 5:25 pm

The difference is that those were anomalies of relatively static conditions in the earth. They were also localized to geographic areas. Finding oil didn’t begin with computing averages of anomalies over the entire globe and then forecasting from global averages where oil would be. What’s the term, oh yeah, peak oil. A global temperature is kind of like forecasting peak oil isn’t it?

The atmosphere is not static at all, in fact, it is non-linear and chaotic with constant natural cycles. Anomalies just don’t capture the natural variation of the whole globe.

Jim Gorman
Reply to  David Middleton
January 2, 2021 5:18 am

It is not the same. Your measurand is basically stationary in both time and location. That means you can take repeatable measurements of the same thing. You are using different measurement devices that hopefully give better precision.

Temperature measurements are not repeatable measurements. They are measured once and then disappear into the past forever. They are not stationary which has serious implications in the statistical treatments used to trend them.

Tim Gorman
Reply to  David Middleton
January 2, 2021 8:59 am

David,

How do you integrate data from independent, uncorrelated populations?

You can do so if the data samples are taken from the same static population. You cannot do so if the data samples are taken from different populations.

How do you integrate the data from the population representing the height of pygmies and from the population representing the height of Watusis? No amount of statistical analysis will come up with a meaningful answer from the integration of the data from the two independent populations. You may find a correlation between the two data sets because heights range from short to tall relative to each population (i.e. the slope of the two sets of data are similar) but that correlation is as meaningless as the correlation between the stock price of Tesla and Home Depot (both going up).

It’s like trying to take the temperature anomalies in St Paul, MN and the temperature anomalies in Houston, TX and integrating them and saying the mean represents a climate average. The two data sets may have a correlation but its only because of daily and seasonal variation. The anomalies tell you nothing about the individual climates let alone a “calculated average climate”.

Climate is determined by the total temperature profile at a location. There is no “global climate location” that can be measured so any calculated global climate is meaningless in the physical world.

Tim Gorman
Reply to  David Middleton
January 2, 2021 12:53 pm

I’m not sure what you mean by a common reference period. If you are talking about the same time frame then you are correct. If you are talking about the same common average temperature then that can’t be right. They would each have a different reference temperature during that time period because their climates are so different.

It’s like Willis trying to explain that the temperatures in Denver and Kansas City are correlated. They are correlated but not dependent. The temperature in Denver does not depend on the temperature in Kansas City and vice versa.

The temperature in St Paul does not depend on the temperature in Jacksonville or vice versa.

The correlation is caused by confounding variables – the turning of the earth and the tilt of the earth. Temperatures go up and down during the day as the earth turns and temperatures go up and down because the tilt of the earth changes in relation to the sun. So the temperatures go up and down in Denver just like in Kansas City causing a correlation between them but that doesn’t make them dependent on each other. And if they are not dependent then they need to be treated as independent.

It’s like Tesla and Home Depot stock both going up. They are correlated but not dependent. There are lots of confounding variables that cause the correlation.

If this were all to *really* be done in a meaningful manner, every station would have its daily temperature profile integrated around a set temperature, e.g. heating and cooling degree-day values. Those degree-day integration values could be summed and graphed over time. A trend line going up outside the uncertainty interval would be assigned a “+” value and a trend line going down outside the uncertainty interval would be given a “-”. Those trend lines staying within the uncertainty interval would be given a value of “u”.

Then add up all the “+”, the “-“, and the “u” entries. And see where the chips fall.

My guess is that the “u” quantity will far exceed the totals of the other two.
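
A minimal sketch of that tally, assuming each station's summed degree-day series has already been reduced to a trend slope plus an uncertainty half-width (the station values below are made up for illustration):

```python
# Tally of "+" / "-" / "u" station trends as described above: a trend only
# counts as up or down if it clears the uncertainty interval.
from collections import Counter

def classify(slope, uncertainty):
    if slope > uncertainty:
        return "+"
    if slope < -uncertainty:
        return "-"
    return "u"

# (trend slope, uncertainty half-width) for a few hypothetical stations
stations = [(0.8, 1.0), (-1.5, 1.0), (2.3, 1.0), (0.2, 1.0), (-0.4, 1.0)]
print(Counter(classify(s, u) for s, u in stations))  # Counter({'u': 3, '-': 1, '+': 1})
```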

Tim Gorman
Reply to  David Middleton
January 3, 2021 6:39 am

Where is the uncertainty analysis in all of this? Every temperature anomaly should have an uncertainty interval attached. As I’ve pointed out elsewhere if the uncertainty intervals are included you simply cannot identify a trend unless the temperature change is greater than 1C. Anything less just gets lost in the uncertainty interval. You don’t know if the trend line is up, down, or stagnant.

You are assuming that all of these stated temperatures are perfectly accurate and are precise out to the number of the digits the calculations come up with. It’s a mathematician’s or computer programmer’s wet dream. All kinds of numbers on the number line and who cares if they actually represent physical reality!

Jeff Alberts
Reply to  David Middleton
January 3, 2021 6:04 pm

But you can’t then average St Paul and Jacksonville together. It’s just bad.

Jeff Alberts
Reply to  David Middleton
January 5, 2021 8:40 pm

When people talk about GAT, or a global mean temperature, they certainly are averaging unlike things together.

Tim Gorman
Reply to  David Middleton
January 6, 2021 6:01 am

Anomalies are useless to determine how much the actual climate is changing. An anomaly difference from 0.1C to 0.2C appears to be far worse climate-wise than the difference between 20.1C and 20.2C. (100% change vs .5% change)

So what has the anomaly actually told you? No one is going to notice a difference between 20.1C and 20.2C. But using anomalies to allow a misleading scaling provides the opportunity to speak about “catastrophic climate change”.

The use of anomalies makes the whole thing useless when it comes to determining the actual physical reality.

fred250
Reply to  David Middleton
January 1, 2021 11:17 am

Working with temperature data that is KNOWN TO BE CORRUPTED..

.. is a MUGS game, and a complete and utter WASTE OF TIME.

Absolutely nothing that comes from it has any meaning whatsoever.

Dave Fair
Reply to  fred250
January 1, 2021 3:16 pm

Well, as the old gambler said, when informed that the dice game was rigged, “But it’s the only game in town.” Denial of the CliSci game won’t make it go away; one must attack using the best data available.

Tom Abbott
Reply to  Dave Fair
January 2, 2021 6:08 pm

“But it’s the only game in town.”

I think that has a big influence on David. That’s the impression I get from him. He wants to play on the Alarmists’ field and beat them at their own game, which is to show that the temperature increases are small and not harmful, even assuming the alarmists have it right.

The only problem with that is there are no temperature increases, except those showing in the fraudulent Hockey Stick global surface temperature chart.

North America has been in a temperature downtrend since the 1930’s. So have all the other regions of the world.

The only thing that contradicts the downtrend is the fraudulent Hockey Stick global surface temperature chart.

Temperatures increased from 1980 to the present, but the temperatures never exceeded the high points in the 1930’s, and the temperatures are now 0.7C cooler than the 1930’s, so any way you look at it, North America has been on a temperature downtrend for decades and it is continuing. CO2 increases have had zero effect on North American temperatures.

None of the Alarmist rhetoric applies to North America. Or any other region, if you go by the actual recorded temperatures.

It’s only when you go by the fraudulent Hockey Stick global surface temperature chart, that one is led astray. That’s the only chart showing warming. That’s the only chart showing unprecedented warming.

No Hockey Stick chart = No CO2 crisis.

Tom Abbott
Reply to  David Middleton
January 4, 2021 5:53 am

I’m not sure what you are saying there, Dave.

The fraudulent Hockey Stick chart, in fact, does show unprecedented warming. How many times have NASA and NOAA declared “the hottest year evah!”? Answer: Numerous times.

The modern-era Hockey Stick chart (covering the last couple of hundred years) does not use proxy data; it takes actual recorded temperature readings and bastardizes them.

The models cannot explain the post-1975 warming without GHG’s because they are wrong. They are assuming too much.

And you are correct that in reality the warming is not unprecedented, but that’s not what the Alarmists and NASA and NOAA are saying.

There has to be unprecedented warming in order for the alarmists to offer CO2 as the next societal calamity. So they created unprecedented warming with their computer-generated fraudulent, modern-era Hockey Stick chart. It was created for political purposes to sell a Human-caused Climate Change narrative.

All the real surface temperature charts tell a different story. They say we are not experiencing unprecedented warming today, because it was just as warm in the recent past, and we have written records to prove it, and that means CO2 is a minor player in determining the Earth’s temperature.

Dave Fair
Reply to  Tom Abbott
January 2, 2021 10:12 pm

Get back to us when you have convinced the UN IPCC, academia, governments worldwide, etc. to completely change their methods of scientific analyses. Until then, it’s the only game in town. Deal with facts; don’t try to create your own alternative reality.

Tim Gorman
Reply to  Dave Fair
January 3, 2021 6:46 am

I think that’s what they told Galileo.

Dave Fair
Reply to  Tim Gorman
January 3, 2021 2:23 pm

As far as I know, Tim, nobody on this site is a budding Galileo.

I think the most promising counter to CliSci’s CAGW is to attack the UN IPCC climate models and their underlying assumptions – often and loudly. It looks like the CMIP6 representations in AR6 are going to duplicate the old AR5 RCP8.5 ‘business as usual’ trick.

Tom Abbott
Reply to  Dave Fair
January 4, 2021 6:01 am

“Deal with facts; don’t try to create your own alternative reality.”

Deal with facts? You mean accept the computer models as legitimate?

Alternate reality? You mean pointing out that past recorded temperatures put the lie to current computer models, and Human-caused Climate Change?

I’m just looking for the truth.

What is your opinion of the Hockey Stick chart? Legitimate, or not? Do you think it gives an accurate picture of reality?

I’m just wondering what reality you live in.

Clyde Spencer
Reply to  David Middleton
January 1, 2021 12:14 pm

How do you know that the present day reading times capture the Min-Max as well as the old Min-Max thermometers that were read once or twice a day? Even modern temperature readings only use min-max for daily mid-range values (improperly called average or mean). I’m not convinced that the adjustments are warranted.

Dave Fair
Reply to  David Middleton
January 1, 2021 3:10 pm

While I tend to agree with you, David, there is documentation that the early CliSci cabal discussed (and apparently accomplished) getting rid of the 1940’s 1.5C “blip” in temperature. I don’t know all the ins and outs of measuring, recording and adjusting historical temperatures, but I’m not willing to bet the family jewels on the assertions of a group of activists. I support using Argo and buoys for SST and satellites and radiosondes for atmospheric temperatures (and humidity), keeping them separate for analytical purposes.

Tim Gorman
Reply to  Dave Fair
January 2, 2021 6:17 am

The Argo floats have a +/- 0.5C uncertainty. In other words an interval of 1C. Any trend that can be identified from this uncertainty would have to exceed 1C. Have we actually seen a 1C increase or decrease in SST?

Satellites take one local measurement per day. These are then used to come up with some “global average”. That’s truly mathematical hokum.

Radiosondes are local measurements with an uncertainty interval. Combining all those local measurements with uncertainty into some kind of meaningful global (or even regional) average using gridding, smoothing, homogenization, etc., without also including a combined uncertainty interval (i.e. root sum square), makes such calculations as useless as those derived from a multiplicity of land-based stations. At some point the uncertainty will overwhelm whatever you are trying to calculate, be it an absolute average or an anomaly average.
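
For the arithmetic being referred to, a minimal sketch of root-sum-square combination, using the ±0.5 C Argo figure quoted above; whether the usual propagation to a mean is appropriate for these measurements is of course exactly what is being argued here:

```python
# Root-sum-square (RSS) combination of independent measurement uncertainties.
from math import sqrt

def rss(uncertainties):
    return sqrt(sum(u * u for u in uncertainties))

readings_u = [0.5] * 100           # 100 independent readings, +/-0.5 C each
u_sum = rss(readings_u)            # uncertainty of the sum: 0.5 * sqrt(100) = 5.0
u_mean = u_sum / len(readings_u)   # textbook propagation to the mean: 0.05
print(u_sum, u_mean)
```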

Dave Fair
Reply to  Tim Gorman
January 2, 2021 5:26 pm

As a practical matter, Tim, so what?

Tim Gorman
Reply to  Dave Fair
January 3, 2021 6:31 am

Because I get sucked into the extra taxes those who believe in CAGW require – even though my *local* climate shows cooling!

And when I look at the cooling/heating degree-days at various places over the earth they also show that cooling degree-days are trending down. I’ve attached just a couple.

If these places are actually cooling then where are the offsetting places that are warming? Remember, the warming places have to be warming more than other places are cooling in order for there to be an overall rise in temperature. Otherwise you just get zero change.

[Attached: china.png, a degree-day chart for a Chinese location]
Dave Fair
Reply to  Tim Gorman
January 3, 2021 1:44 pm

Again, Tim, so what? Will the universe’s understanding of your pettifogging lower your taxes? Also, your graph has no information as to what you are measuring/showing. If I had any interest whatsoever, I guess I could locate Fengxiang Chengguanzhen on a map with the information your graph provides, but nothing else.

The issue is how do we stop the world’s governments from abusing science to achieve a form of global Marxism? Ask yourself: How do any of my postings help stop their activities?

Tom Abbott
Reply to  Dave Fair
January 2, 2021 6:16 pm

“While I tend to agree with you, David, there is documentation that the early CliSci cabal discussed (and apparently accomplished) getting rid of the 1940’s 1.5C “blip” in temperature.”

Yes, they were talking among themselves about how to erase the warmth of the Early Twentieth Century. After all, you can’t claim humanity is experiencing unprecedented warming today, if it was just as warm in the last century.

So the Climategate Charlatans and their Spawn erased that impediment to their Human-caused Climate Change/Global Warming scam. That way it made the temperature chart profile look like it has been getting hotter and hotter for decade after decade and now is the hottest time in human history. Just perfect for scaring people! And fooling others into thinking it represents reality.

Dave Fair
Reply to  Tom Abbott
January 2, 2021 10:39 pm

It is the post-1950 warming the UN IPCC claims is primarily AGW. No matter the absolute values, the early 20th Century warming trend and the large paleo trend fluctuations are firmly established in all temperature reconstructions for the past 10,000 years. CliSci is unable to deny the fact of natural climate variations; they gave up defending Mikey Mann.

CAGW is based on water vapor and clouds amplifying the small theoretical warming of CO2. One must hammer on the fact that there are no empirical data supporting that assertion. The UN IPCC climate models, especially the new CMIP6 models, are falsified by actual atmospheric temperature and humidity measurements using satellite and radiosonde instrumentation. Pound on your opponents’ weaknesses; don’t get involved in distracting, niggling fine points of mathematics.

Tom Abbott
Reply to  David Middleton
January 2, 2021 5:52 pm

“I haven’t seen anything that shows that the adjustments were improper or wrong.”

You need to spend some time over at Tony Heller’s website.

Shoshin
January 1, 2021 7:30 am

It’s always bothered me that with the coming of AGW hysteria, the CO2 Atmospheric Residence Time (CART) suddenly shifted from a generally accepted 10 years (meaning no problem re: AGW) to a value of 100 years (needed to fit the computer models) once AGW theories began to be monetized and marketed like house siding at Costco.

http://euanmearns.com/the-residence-time-of-co2-in-the-atmosphere-is-33-years/

Years ago I found a paper that documented a CART of close to ten years based on Carbon isotopes generated by French Nuclear tests in Polynesia in the early 1960’s. At the time of publication of the data, no one had heard or dreamt of AGW, so there was no thumb on the scale. I’ve looked for that paper several times since but Google seems to have memory holed it.

As Scott Adams of Dilbert fame notes, anytime there is a complex system with huge amounts of money at stake and no penalty for wrongdoing, cheating is a certainty.

Doesn’t matter if it’s AGW or an election. Cheating is guaranteed. To not acknowledge it and guard against it, one must be spectacularly negligent. Either that, or in on the con.

Paul C
January 1, 2021 7:58 am

I think the labelling on the graph is unclear. No “observed” global mean temperature exists. Only a modelled global mean temperature can possibly be derived. The entire temperature “record” is a corrupted empirical dynamic model where the input data is also output data through a process of homogenisation and infill. Perhaps a “dynamic modelled” label rather than “observed” would lend some clarity to what is being plotted.
As an aside, after viewing the station-based temperature records in this way, it follows that evaluation of the series can be performed simply by removing each one of the station records, running the full homogenisation modelling process, and comparing the infilled station data to the actual station data. Every discrepancy highlights a failure of the modelling process, and the maximum discrepancy identifies the level of error. Being able to label a temperature series as having a known error of that value would give a good indication of the (lack of?) quality of the “data”.
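
A minimal sketch of that leave-one-out check; `run_homogenisation` is a hypothetical placeholder for whatever homogenisation/infilling process is being evaluated, not a real library call:

```python
# Drop each station in turn, rebuild the field from the rest, and compare
# the infilled series at that location to the withheld observations.
def leave_one_out_errors(stations, run_homogenisation):
    # stations: dict mapping station id -> list of observed values
    # run_homogenisation(remaining, station_id): infilled series for station_id
    errors = {}
    for sid, observed in stations.items():
        remaining = {k: v for k, v in stations.items() if k != sid}
        infilled = run_homogenisation(remaining, sid)
        errors[sid] = max(abs(a - b) for a, b in zip(infilled, observed))
    return errors

# max(leave_one_out_errors(...).values()) would be the "known error" label
# proposed above.
```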

Clyde Spencer
Reply to  Paul C
January 1, 2021 12:18 pm

Even the simple “calculated” would be better than “observed.”

Dave Fair
Reply to  Paul C
January 1, 2021 3:23 pm

It would seem to be a very simple computer modeling endeavour. Has it been done? If not, why not? If necessary, could we WUWT denizens do a crowd-funding to accomplish it? Although I have the skills to accomplish the task, I’m not enough of an activist to do the work.

Mickey Reno
January 1, 2021 9:24 am

That big rise in the 1930s was warmer than today, and the curve being displayed here that shows modern temps as warmer is in error, due to manipulation of the official record by climate alarmists. The Dust Bowl was a global event (global in the sense of carrying on across all land areas in the Northern Hemisphere). Temps have cooled since then, but bureaucrats have learned that they get better job security and better funding when the temperatures are adjusted to rise so they can scare people.

When the curve of the adjustments looks exactly like the curve of the impending problem which will lead to our doom, science is no longer pertinent. Why do we not simply begin relying on space-based measures? Because they’re not hot enough. Big Brother/Information Retrieval/NOAA/NASA-GISS/Met offices in UK and Aus have tipped the curve. They cannot blame me because I don’t believe their lies. They can only blame themselves for lying. But lying pays so well, and gives them jobs, pensions (guaranteed, way better than private sector pensions), trips to exotic locales several times a year (pre-Covid, anyway), so there is little incentive to go back and find these data crimes, and correct the record and punish the bureaucrats (I include academics who are funded by government in this bucket). Roy Spencer and John Christy should be the leaders of climate change measuring, and somehow they are hated by the bureaucrats. That tells us all something important.

bethan456@gmail.com
January 1, 2021 9:39 am

“COUP” … 7 million popular votes and 74 Electoral College votes and 60+ failed lawsuits. Dear Mr. Middleton is living in the right wing echo chamber.

Rud Istvan
Reply to  bethan456@gmail.com
January 1, 2021 11:02 am

Bethan, you are ignoring three sets of facts.
Set one is historical indicia. Trump enthusiasm off the charts (rallies, spontaneous rallies) v basement Biden. Won FL and OH. Won 18/19 bellwether counties. And so on.
Set two is Nov 3 indicia of fraud. Poll watchers excluded in Philly and Detroit despite court orders. NV AI signature matching set to 40 dpi. Historically, about 1/10 the mail-ins rejected compared to 2018. ‘Miraculous’ late night Biden vote dumps in 5 states…
Set three is hard evidence. In PA, ~200,000 more ballots counted than voters who voted. In Antrim County, MI, DVS switched 6000 votes to Biden from Trump, and when later forensically audited, it turns out this was just the tip of the iceberg.
All hard evidence will be exposed in the halls of Congress 1/6/21.

bethan456@gmail.com
Reply to  Rud Istvan
January 1, 2021 11:30 am

1) Enthusiasm is measured in the popular vote.
2) Ballots are not signed, so a mismatched signature doesn’t tell you which candidate got the vote. The red mirage was expected, so that trashes the “dump” theory.
3) All of your so called “evidence” was proffered to the judges in the lawsuits, and rejected. Barr said there was no widespread fraud.

Carlo, Monte
Reply to  bethan456@gmail.com
January 1, 2021 1:28 pm

1) Why does the SCYTL/Dominion software hold vote totals as floating point numbers rather than integers?

2) What is your plausible explanation for negative vote counts that shifted thousands of votes from R to D, as was documented in CNN live coverage on Nov. 3?

3) How do absentee ballot papers that supposedly were transferred out-and-back by physical mail have no fold creases?

Carlo, Monte
Reply to  bethan456@gmail.com
January 1, 2021 1:43 pm

And there is the issue of how Democrat officials changed election procedures, especially by flooding hundreds of thousands of unsolicited mail-in ballots and loosening signature verification requirements, bypassing the state legislatures in a completely unconstitutional manner, as explained by Mark Levin:

https://www.theblaze.com/op-ed/levin-on-january-6-we-learn-whether-our-constitution-will-hold

Tom Abbott
Reply to  bethan456@gmail.com
January 2, 2021 6:26 pm

“All of your so called “evidence” was proffered to the judges in the lawsuits, and rejected.”

No, not true. Not one judge has heard the evidence of the case. The judges have dismissed the lawsuits on technicalities. And most of the lawsuits were not filed by the Trump Team, but by other parties.

Barr did not say there was no widespread fraud, he said there was no evidence available to him at the time of the statement to say there was widespread fraud. Do you see the difference?

In about four days the Trump team will try to make the case to Congress that the election was stolen. This is where the evidence will be presented and this will be the first time the evidence has been presented.

Bill Rocks
Reply to  Rud Istvan
January 1, 2021 11:48 am

Intriguing summary Rud. I did not know that NV AI signature match was set to 40 dpi. The dramatic change in rejection of mail in votes is stunning, in total, and you have given some detail about how this may have happened in NV. Now I am paying attention to that and the other sets of facts.

Richard M
Reply to  bethan456@gmail.com
January 2, 2021 8:04 am

The CCP had the means, the opportunity and the motive. With their massive hack into the US systems last year they were able to access all the data they needed to create millions of fraudulent ballots. To think they would not use this information to help them eliminate Trump shows a level of naivety typical of fools.

Robert of Texas
January 1, 2021 10:23 am

This all seems to pretend the global land temperature data sets are correct and not full of misguided and unjustified corrections. First you need reliable data that is consistent with the analysis it will be used in – until you get that, all your analysis of temperature change is Garbage (the amount of error it contains overwhelms any attempt at correlation).

Fix the data. That should be the emphasis. This likely means throwing out data from the majority of measurement sites, but at least you would have higher quality data available. Build better-suited measuring sites and keep them as pristine as possible. Quit pretending we understand climate change, and approach it with an open mind willing to track down many causes and effects.

I am not going to hold my breath. People are blind when it comes to questioning their own beliefs.

Kpar
January 1, 2021 10:51 am

David, my only objection to your use of the phrase “November coup d’état” is that, so far, it is still an attempted coup d’état, and that it has been going on for more than four years…

fred250
January 1, 2021 10:58 am

Working with temperature data that is KNOWN TO BE CORRUPTED..

.. is a MUGS game, and a complete and utter WASTE OF TIME.

Absolutely nothing that comes from it has any meaning whatsoever.

Pat Frank
January 1, 2021 11:41 am

Non-predictive climate models calibrated against an inaccurate air temperature record.

Yup. There’s a science-based approach to policy advice, alright.

bigoilbob
Reply to  Pat Frank
January 1, 2021 11:47 am

Yet another self-citation of your ridiculous, anti-statistical article. You know, the one with NO other relevant citations. Listen to your mom. Keep it up and you’ll go blind….

fred250
Reply to  bigoilbob
January 1, 2021 12:50 pm

Poor big oily blob

Your grasp of basic maths is… basically NIL!!

You are already willfully blind from your own self-gratifications..

Pat Frank
Reply to  bigoilbob
January 1, 2021 2:02 pm

Thanks for your mindless view, boBob.

Do feel free to continue making your display.

Jay Willis
Reply to  bigoilbob
January 1, 2021 2:46 pm

Sorry Bob, but the first is a peer-reviewed article in a leading scientific journal. So while peer review has come in for some justifiable criticism, it sure beats some insulting rant from an anonymous commenter. If you have an objection to the work, you should try to explain it fully and, on this forum at least, I’m sure it will be discussed with respect and interest.

bigoilbob
Reply to  Jay Willis
January 1, 2021 3:02 pm

“Sorry Bob, but the first is a peer-reviewed article in a leading scientific journal.”

He pimped it for 10 years before finding 2 compliant reviewers. Check out the article impact. Or rather the complete lack of it.

“So while peer review has come in for some justifiable criticism, it sure beats some insulting rant from an anonymous commenter.”

Hardly one “anonymous commenter”. Rather, EVERY post-publication reviewer, including climate “skeptics”.

I do agree that Pat’s single peer-reviewed publishing success is a validation of the peer-review process. They would rather let a head-scratcher get by, and then be politely ignored, than censor even the nonsense he espouses.

Jim Gorman
Reply to  bigoilbob
January 1, 2021 5:36 pm

Mindless ad hominem. If you can’t even repeat the “criticisms” in actual math form, you have no basis for your judgement. In fact, that is an argumentative fallacy.

“In order for the argument from authority to be considered a logical fallacy, the argument must appeal to the authority because of their qualifications, and not because of their evidence in the argument. Moreover, the argument can be fallacious if the authority lacks actual qualifications in the field being discussed.” (Source: “Argument from authority or false authority – logical fallacies,” skepticalraptor.com)

Tom Abbott
Reply to  Jim Gorman
January 2, 2021 6:37 pm

“Mindless ad hominem.”

Yep, that’s what it is. Bob needs to try a little anger management treatment.

Pat Frank
Reply to  bigoilbob
January 1, 2021 5:45 pm

Neither Davide Zanchettin nor Carl Wunsch are compliant reviewers. You demeaned honest reviewers merely to disparage my work, boBob. Despicable.

You also stoop to demean persistence in the face of obviously incompetent reviewers. I can document all of it, in their own words. Their incompetence is also on view here.

Also yours, boBob, as you choose to stand with them.

What post-publication reviewer touched my work, boBob? Certainly not Nick Stokes. He doesn’t even understand instrumental resolution. Your forelock-tugging deference to him at moyhu bodes ill for any unbiased judgment from you.

Not Roy Spencer either, whose critique is a total mess and who not only admitted to not understanding propagation of error but demonstrated that condition.

Inept criticisms are a weak but appropriate hook for so vacuous a hostility as yours, boBob.

“single peer-reviewed publishing success”? Further peer-reviewed publications showing the general incompetence of the AGW consensus:

Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit
https://journals.sagepub.com/doi/10.1260/0958-305X.21.8.969

Imposed and Neglected Uncertainty in the Global Average Surface Air Temperature Index
https://journals.sagepub.com/doi/10.1260/0958-305X.22.4.407

Negligence, Non-Science, and Consensus Climatology
https://journals.sagepub.com/doi/abstract/10.1260/0958-305X.26.3.391

And a conference paper: Systematic Errors in Climate Measurements
https://www.worldscientific.com/worldscibooks/10.1142/10253

Let’s see, four is greater than one, isn’t it, boBob? I suspect you’re capable of figuring at that level.

There isn’t a leg of the AGW corpus that survives critical scrutiny. It’s incompetence all the way down. And you’ve bought yourself a ticket on that ride.

bigoilbob
Reply to  Pat Frank
January 2, 2021 7:52 am

“single peer-reviewed publishing success? Further peer-reviewed publications showing the general incompetence of the AGW consensus:”

My mistake. I actually searched and found nothing. I hope these articles were better thought out, and if so, had at least a little superterranean impact.

Pat Frank
Reply to  bigoilbob
January 2, 2021 11:25 am

bobBob: “My mistake. I actually searched and found nothing.”

Google Scholar search on Patrick Frank climate

You didn’t search very hard boBob.

You’re apparently a professional and a grown man, boBob. But your arguments are invariably puerile. Aren’t you the least bit embarrassed for yourself?

fred250
Reply to  bigoilbob
January 1, 2021 8:05 pm

poor big oily blob,

…. compounding his ignorance at each time step !!

fred250
Reply to  bigoilbob
January 1, 2021 8:09 pm

Trouble is, big slimy blob..

Pat is so many magnitudes ahead of you when it comes to mathematical understanding, you seem like a pre-schooler to him,

…..and to anyone else with even a small understanding of basic error analysis.

You are a mathematical NON-ENTITY !!

Go back to junior high, and at least TRY next time. !!

Clyde Spencer
Reply to  bigoilbob
January 1, 2021 9:14 pm

Your remarks are unbecoming for even a teenager. They certainly don’t seem to support your claims of your work background.

bigoilbob
Reply to  Clyde Spencer
January 2, 2021 8:19 am

“Unbecoming”? Unclutch those pearls. As opposed to your ongoing, undocumented rants about a Biden “coup”?

And my “work background” is in actual petroleum OPERATIONS. To paraphrase Dan Aykroyd, “They want results.”

“Unbecoming” is the term you go to when your ability to effectively rebut goes away.

Pat Frank
Reply to  bigoilbob
January 2, 2021 11:58 am

You’ve never demonstrated an ability to effectively rebut, bobBob, and yet you don’t go away.

Clyde Spencer
Reply to  bigoilbob
January 3, 2021 12:17 pm

“… ongoing, undocumented, rants about a Biden ‘coup’?” I’m afraid you are confusing me with someone else. However, that is consistent with your typically delusional claims. The point is, you have no credibility with the readers here and your ad hominem attacks do nothing to improve your lack of credibility.

fred250
Reply to  Pat Frank
January 1, 2021 12:49 pm

Well said, Pat 🙂

GIGO, at its WORST !

Pat Frank
Reply to  fred250
January 2, 2021 2:17 pm

Thanks, fred

Dave Fair
Reply to  Pat Frank
January 1, 2021 3:28 pm

And, yet, the models still can’t match the historicals!

Dave Fair
Reply to  Dave Fair
January 1, 2021 5:23 pm

I’m sorry, everyone. My comment was not meant to get in between those arguing statistical error; I have no expertise to add to the discussion. I was simply criticizing the UN IPCC models as being inconsistent with 20th-century temperature trends, both within the models’ calculated history and in the trends recorded after the cutoff date separating model-calculated history from model projection. Forward or backward, model results are bunk. Models are not a sufficient basis for fundamentally altering our society, economy or energy systems. And I sure as hell don’t trust Marxists to guide me to their grand future. GND? Ha!

Dave Fair
Reply to  Dave Fair
January 1, 2021 5:28 pm

Oh, and I have a whole career (electrical engineer through utility GM/CEO) in planning, finance, design, construction and O&M of electric utility generation, transmission and distribution systems, with a small detour to make nuclear weapons.

Ulric Lyons
January 1, 2021 4:49 pm

All nonsense. Stronger solar wind states in the 1970s drove colder ocean phases and global cooling, and weaker solar wind states since 1995 have driven warmer ocean phases and global warming. The warm ocean phases are then amplified by the increase in lower-troposphere water vapour and the decline in low cloud cover which they drive.

Mr. Lee
January 1, 2021 5:50 pm

If you could just explain how they calculated the “change in temperature” or “mean temperature anomaly”, then I would be glad to comment on your article.

I won’t pretend to know what “mean global change in temperature” is. How did they calculate each point on that graph, so that I may understand what it represents?

Pat Frank
Reply to  David Middleton
January 1, 2021 8:44 pm

And assuming all the measurement error is random and just goes away, tra la.

Mr. Lee
Reply to  David Middleton
January 2, 2021 3:51 am

Thank you for responding.

So it is just a sum of raw temperature readings from the same stations, each weighted (aka fudge factored) the same for each station for each year…from 1900 until now? Yes? No?

Why introduce fudge factors at all, when their proper values (whatever that may mean) are unclear? In fact, the notion of a global mean temperature is unclear….the Earth’s surface is not homogeneous…in any way.

If a “mean” is meaningless, then why can’t I simply have a scatter plot of (raw) mean annual temperature vs. year for every station (all on the same graph) with perhaps a line that shows the median of the distribution for each year? That would mean something to me. Why haven’t I ever seen this?

How about an animation that displays a histogram of the annual mean temperature for all stations through time? What is wrong with that?

I mean seriously, why can’t I see the raw data visualized in a way that can be easily explained? Am I not good enough? Is the data a matter of high national security? Might I get the wrong idea?
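
For what it’s worth, the kind of plot being asked for here is trivial to make. Below is a minimal sketch, assuming a hypothetical file “station_annual_means.csv” with columns station_id, year and t_mean; the file and column names are illustrative only, not any agency’s actual product.

```python
# A minimal sketch: every station's raw annual mean plotted against year,
# with the per-year median of the raw distribution overlaid.
# ASSUMPTION: "station_annual_means.csv" with columns station_id, year, t_mean
# is hypothetical, used only to illustrate the plot being requested.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("station_annual_means.csv")

# Scatter of raw station annual means (no anomalies, no adjustments).
plt.scatter(df["year"], df["t_mean"], s=2, alpha=0.2, label="individual stations")

# Median of the raw distribution for each year.
median_by_year = df.groupby("year")["t_mean"].median()
plt.plot(median_by_year.index, median_by_year.values, color="red", label="annual median")

plt.xlabel("Year")
plt.ylabel("Raw annual mean temperature (deg C)")
plt.legend()
plt.show()
```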

Pat Frank
Reply to  David Middleton
January 2, 2021 2:19 pm

And they never include the root-sum-square combined uncertainties from the temps and the normals.
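
For concreteness, an anomaly is a measured temperature minus a 30-year normal, so the root-sum-square combination referred to here is the standard rule for propagating independent uncertainties through a difference (a sketch of the general rule, not any particular agency’s error budget):

$$\sigma_{\text{anomaly}} = \sqrt{\sigma_{T}^{2} + \sigma_{N}^{2}}$$

where $\sigma_{T}$ is the uncertainty of the measured temperature and $\sigma_{N}$ is the uncertainty of the climatological normal it is referenced to.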

Clyde Spencer
Reply to  Mr. Lee
January 3, 2021 12:39 pm

It is a little convoluted, in that they take daily max and min temps and calculate the mid-range value; then they may calculate weekly, monthly, or annual means from the daily mid-range values for stations. Missing station values (or apparent outliers) may be replaced by anomalies interpolated from stations up to 1200 km distant.

There are problems, such as cold fronts moving from west to east in the US, resulting in some western stations having colder temps than stations at the same elevation (for example, in the Great Plains) when readings are taken at the same time in different time zones. Then there are issues from adiabatic heating as air masses with different relative humidity descend the leeward side of mountains.

While the baseline 30-year averages (different for different agencies) are assumed to represent past climate, there are problems with micro-climates and with stations being retired or moved that call into question the veracity of the climate-proxy baselines. In reality, the baseline is arbitrary and apparently chosen to accentuate the appearance of recent warming.
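
The first step described above is easy to show in code. Here is a minimal sketch of computing daily mid-range values and monthly means from max/min readings; the input file “station_daily.csv” and its column names are illustrative assumptions, not any agency’s actual pipeline.

```python
# A minimal sketch: the daily value is the mid-range of Tmax and Tmin (not a
# true daily mean), and monthly values are averages of those mid-ranges.
# ASSUMPTION: "station_daily.csv" with columns date, tmax, tmin is hypothetical.
import pandas as pd

daily = pd.read_csv("station_daily.csv", parse_dates=["date"])

# Mid-range value: (Tmax + Tmin) / 2 for each day.
daily["t_mid"] = (daily["tmax"] + daily["tmin"]) / 2.0

# Monthly mean of the daily mid-range values.
monthly = daily.set_index("date")["t_mid"].resample("MS").mean()
print(monthly.head())
```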

Jim Gorman
Reply to  David Middleton
January 2, 2021 5:45 am

Calculating a time series trend of non-stationary data is a major problem. Combining averaged data from different non-stationary data to obtain an overall trend is even worse.

Read the following from Duke University. An excerpt: “Statistical stationarity: A stationary time series is one whose statistical properties such as mean, variance, autocorrelation, etc. are all constant over time.” Stationarity and differencing of time series data (duke.edu)
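
For readers who want to test this themselves, below is a minimal sketch of the usual stationarity check (an augmented Dickey-Fuller test) applied to a temperature series, with first differencing as the standard remedy for a trending series. The data file and column name are illustrative assumptions.

```python
# A minimal sketch of a stationarity check on a temperature series.
# ASSUMPTION: "global_anomaly.csv" with an "anomaly" column is hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

anomalies = pd.read_csv("global_anomaly.csv")["anomaly"]

# Null hypothesis of the ADF test: the series has a unit root (is non-stationary).
adf_stat, p_value, *_ = adfuller(anomalies.dropna())
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

# A common remedy for a trending (non-stationary) series is first differencing
# before fitting trends or other models to it.
differenced = anomalies.diff().dropna()
```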

Tim Gorman
Reply to  Jim Gorman
January 2, 2021 1:11 pm

I love this quote from your site:

“Prediction is very difficult, especially if it’s about the future.”
–Niels Bohr, Nobel laureate in Physics

“This quote serves as a warning of the importance of validating a forecasting model out-of-sample. It’s often easy to find a model that fits the past data well (perhaps too well!) but quite another matter to find a model that correctly identifies those patterns in the past data that will continue to hold in the future.”
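
The point about out-of-sample validation can be shown in a few lines: fit on the early part of a series only, then score on the held-out later part. This is a minimal sketch with a simple linear trend; the data file and column names are illustrative assumptions.

```python
# A minimal sketch of out-of-sample validation on a time series.
# ASSUMPTION: "global_anomaly.csv" with columns year, anomaly is hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("global_anomaly.csv")
split = int(len(df) * 0.7)                      # chronological split, no shuffling
train, test = df.iloc[:split], df.iloc[split:]

# Fit a linear trend on the training window only.
slope, intercept = np.polyfit(train["year"], train["anomaly"], deg=1)

# Evaluate on the unseen holdout window.
pred = slope * test["year"] + intercept
rmse = np.sqrt(np.mean((test["anomaly"] - pred) ** 2))
print(f"holdout RMSE = {rmse:.3f}")
```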

Sunsettommy
Editor
January 1, 2021 7:00 pm

I see that some here get excited about ANOTHER unverified climate model. I have decided not to read any more of them; since there are many HUNDREDS of them that are just variations of each other, the field is sliding into the pseudoscience arena.

How about getting back to old-style science research: trekking the actual areas under study, collecting earth-based data, and building a hypothesis from it?

It was, after all, what naturalists and geologists did in the early days when they studied past climate effects.

fred250
Reply to  Sunsettommy
January 1, 2021 8:13 pm

If the model was anywhere near correct, there would only need to be one of them.

The FACT that there are SO, SO MANY,

… and they give such a HUGE variation of results…

…. proves that NO-ONE really has even a basic clue.

Richard M
January 2, 2021 8:16 am

Can’t say I can agree with any analysis using global surface stations. Have we warmed? Sure, but it appears to be mainly due to natural ocean cycles. The latest UAH (December) is now starting to show the effects of the end of the 6-year El Niño dominance.

If the current La Niña continues for 2 years, as many expect, we could find that much of the warming shown in those graphs has completely disappeared. Negative anomalies this summer are a definite possibility.

Jean Parisot
January 2, 2021 8:57 am

The graph should be labeled “+0.8 deg caused by unknown, ignored, or poorly measured factors – some of which might be caused by man (except for land-use changes and observer bias, which we aren’t allowed to consider).”

Jeff Alberts
January 2, 2021 7:07 pm

“But I digress.”

Willis? Is that you in disguise?