Read this press release in Spanish here.
Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record, according to independent analyses done by NASA and the National Oceanic and Atmospheric Administration (NOAA).
Continuing the planet’s long-term warming trend, global temperatures in 2021 were 1.5 degrees Fahrenheit (0.85 degrees Celsius) above the average for NASA’s baseline period, according to scientists at NASA’s Goddard Institute for Space Studies (GISS) in New York. NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.
Collectively, the past eight years are the warmest years since modern recordkeeping began in 1880. This annual temperature data makes up the global temperature record – which tells scientists the planet is warming.
According to NASA’s temperature record, Earth in 2021 was about 1.9 degrees Fahrenheit (or about 1.1 degrees Celsius) warmer than the late 19th century average, the start of the industrial revolution.
“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson. “Eight of the top 10 warmest years on our planet occurred in the last decade, an indisputable fact that underscores the need for bold action to safeguard the future of our country – and all of humanity. NASA’s scientific research about how Earth is changing and getting warmer will guide communities throughout the world, helping humanity confront climate change and mitigate its devastating effects.”
This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere. The planet is already seeing the effects of global warming: Arctic sea ice is declining, sea levels are rising, wildfires are becoming more severe and animal migration patterns are shifting. Understanding how the planet is changing – and how rapidly that change occurs – is crucial for humanity to prepare for and adapt to a warmer world.
Weather stations, ships, and ocean buoys around the globe record the temperature at Earth’s surface throughout the year. These ground-based measurements of surface temperature are validated with satellite data from the Atmospheric Infrared Sounder (AIRS) on NASA’s Aqua satellite. Scientists analyze these measurements using computer algorithms to deal with uncertainties in the data and quality control to calculate the global average surface temperature difference for every year. NASA compares that global mean temperature to its baseline period of 1951-1980. That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth’s temperature.
Many factors affect the average temperature in any given year, such as the La Niña and El Niño climate patterns in the tropical Pacific. For example, 2021 was a La Niña year, and NASA scientists estimate that it may have cooled global temperatures by about 0.06 degrees Fahrenheit (0.03 degrees Celsius) from what the average would have been.
A separate, independent analysis by NOAA also concluded that the global surface temperature for 2021 was the sixth highest since record keeping began in 1880. NOAA scientists use much of the same raw temperature data in their analysis but apply a different baseline period (1901-2000) and methodology.
“The complexity of the various analyses doesn’t matter because the signals are so strong,” said Gavin Schmidt, director of GISS, NASA’s leading center for climate modeling and climate change research. “The trends are all the same because the trends are so large.”
NASA’s full dataset of global surface temperatures for 2021, as well as details of how NASA scientists conducted the analysis, are publicly available from GISS.
GISS is a NASA laboratory managed by the Earth Sciences Division of the agency’s Goddard Space Flight Center in Greenbelt, Maryland. The laboratory is affiliated with Columbia University’s Earth Institute and School of Engineering and Applied Science in New York.
For more information about NASA’s Earth science missions, visit:
-end-
A slightly OT comment but of instant relevance, concerning UK energy supply demand.
Average temp across the UK is now (at nearly 22:00 GMT) about minus 2 Celsius.
Meaning that all of Bojo’s proposed air-source heat-pumps will be frozen solid and not pumping any heat.
Meanwhile the windmills are producing 3.1 GW as their contribution to a UK consumption of 37 GW, which is high for the time of day.
Heads are certainly going to roll….
PS and actually On Topic, but I already told you all: my Wunderground personal weather stations on the western side of England (not the UK, just England) all recorded 2021 as among their (4th or 5th) coldest in their 20-year record
“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson.
Is he psychotically deluded or just a liar?
Science says otherwise.
“World Atmospheric CO2, Its 14C Specific Activity, Non-fossil Component, Anthropogenic Fossil Component, and Emissions (1750–2018)”
https://journals.lww.com/health-physics/Fulltext/2022/02000/World_Atmospheric_CO2,_Its_14C_Specific_Activity,.2.aspx
Results in this paper and citations in the scientific literature support the following 10 conclusions.
According to Wiki: “Clarence William Nelson II (born September 29, 1942) is an American politician and attorney serving as the administrator of the National Aeronautics and Space Administration (NASA)”.
Nelson is an ex-politician (D) and Biden’s flunky but if that is the official view of NASA then that organization should never be trusted with assessing highly inexact surface temperature data.
These from NOAA for my little spot on the globe.
<pre>
Record Highs for the day as listed in July 2012, compared to what was listed in 2007
(Newer = April '12 list, Older = '07 list; did not include ties)

Newer            Older            Note
6-Jan  68 1946   Jan-06 69 1946   Same year but "new" record 1*F lower
9-Jan  62 1946   Jan-09 65 1946   Same year but "new" record 3*F lower
31-Jan 66 2002   Jan-31 62 1917   "New" record 4*F higher but not in '07 list
4-Feb  61 1962   Feb-04 66 1946   "New" tied records 5*F lower
4-Feb  61 1991
23-Mar 81 1907   Mar-23 76 1966   "New" record 5*F higher but not in '07 list
25-Mar 84 1929   Mar-25 85 1945   "New" record 1*F lower
5-Apr  82 1947   Apr-05 83 1947   "New" tied records 1*F lower
5-Apr  82 1988
6-Apr  83 1929   Apr-06 82 1929   Same year but "new" record 1*F higher
19-Apr 85 1958   Apr-19 86 1941   "New" tied records 1*F lower
19-Apr 85 2002
16-May 91 1900   May-16 96 1900   Same year but "new" record 5*F lower
30-May 93 1953   May-30 95 1915   "New" record 2*F lower
31-Jul 100 1999  Jul-31 96 1954   "New" record 4*F higher but not in '07 list
11-Aug 96 1926   Aug-11 98 1944   "New" tied records 2*F lower
11-Aug 96 1944
18-Aug 94 1916   Aug-18 96 1940   "New" tied records 2*F lower
18-Aug 94 1922
18-Aug 94 1940
23-Sep 90 1941   Sep-23 91 1945   "New" tied records 1*F lower
23-Sep 90 1945
23-Sep 90 1961
9-Oct  88 1939   Oct-09 89 1939   Same year but "new" record 1*F lower
10-Nov 72 1949   Nov-10 71 1998   "New" record 1*F higher but not in '07 list
12-Nov 75 1849   Nov-12 74 1879   "New" record 1*F higher but not in '07 list
12-Dec 65 1949   Dec-12 64 1949   Same year but "new" record 1*F higher
22-Dec 62 1941   Dec-22 63 1941   Same year but "new" record 1*F lower
29-Dec 64 1984   Dec-29 67 1889   "New" record 3*F lower

Record Lows for the day as listed in July 2012, compared to what was listed in 2007
(Newer = '12 list, Older = '07 list; did not include ties)

Newer            Older            Note
7-Jan  -5 1884   Jan-07 -6 1942   New record 1 warmer and 58 years earlier
8-Jan  -9 1968   Jan-08 -12 1942  New record 3 warmer and 37 years later
3-Mar   1 1980   Mar-03  0 1943   New record 3 warmer and 26 years later
13-Mar  5 1960   Mar-13  7 1896   New record 2 cooler and 64 years later
8-May  31 1954   May-08 29 1947   New record 3 warmer and 26 years later
9-May  30 1983   May-09 28 1947   New tied record 2 warmer, same year and 19 and 36 years later
9-May  30 1966
9-May  30 1947
12-May 35 1976   May-12 34 1941   New record 1 warmer and 45 years later
30-Jun 47 1988   Jun-30 46 1943   New record 1 warmer and 35 years later
12-Jul 51 1973   Jul-12 47 1940   New record 4 warmer and 33 years later
13-Jul 50 1940   Jul-13 44 1940   New record 6 warmer and same year
17-Jul 52 1896   Jul-17 53 1989   New record 1 cooler and 93 years earlier
20-Jul 50 1929   Jul-20 49 1947   New record 1 warmer and 18 years earlier
23-Jul 51 1981   Jul-23 47 1947   New record 4 warmer and 34 years later
24-Jul 53 1985   Jul-24 52 1947   New record 1 warmer and 38 years later
26-Jul 52 1911   Jul-26 50 1946   New record 2 warmer and 35 years later
31-Jul 54 1966   Jul-31 47 1967   New record 7 warmer and 1 year later
19-Aug 49 1977   Aug-19 48 1943   New record 1 warmer and 10, 21 and 34 years later
19-Aug 49 1964
19-Aug 49 1953
21-Aug 44 1950   Aug-21 43 1940   New record 1 warmer and 10 years later
26-Aug 48 1958   Aug-26 47 1945   New record 1 warmer and 13 years later
27-Aug 46 1968   Aug-27 45 1945   New record 1 warmer and 23 years later
12-Sep 44 1985   Sep-12 42 1940   New record 2 warmer and 15, 27 and 45 years later
12-Sep 44 1967
12-Sep 44 1955
26-Sep 35 1950   Sep-26 33 1940   New record 2 warmer and 12 earlier and 10 years later
26-Sep 35 1928
27-Sep 36 1991   Sep-27 32 1947   New record 4 warmer and 44 years later
29-Sep 32 1961   Sep-29 31 1942   New record 1 warmer and 19 years later
2-Oct  32 1974   Oct-02 31 1946   New record 1 warmer and 38 years earlier and 19 years later
2-Oct  32 1908
15-Oct 31 1969   Oct-15 24 1939   New tied record same year but 7 warmer and 22 and 30 years later
15-Oct 31 1961
15-Oct 31 1939
16-Oct 31 1970   Oct-16 30 1944   New record 1 warmer and 26 years later
24-Nov  8 1950   Nov-24  7 1950   New tied record same year but 1 warmer
29-Nov  3 1887   Nov-29  2 1887   New tied record same year but 1 warmer
4-Dec   8 1976   Dec-04  3 1966   New record 5 warmer and 10 years later
21-Dec -10 1989  Dec-21 -11 1942  New tied record same year but 1 warmer and 47 years later
21-Dec -10 1942
31 ?             Dec-05  8 1976   December 5 missing from 2012 list
</pre>
DANG! I forgot “pre” doesn’t work anymore with the new format.
Basically, of the record highs and lows listed in 2007 and in 2012, about 10% have been “adjusted”. Not new records set, old records changed.
PS The all time recorded high was 106 F set 7-21-1934 and tied 7-14-1936. The all time recorded low was -22 F set 1-19-1994.
“pre” was a way to put up a table that didn’t show up as a confused mess like mine above! 😎
And where are the 22 years of satellite data for the lower troposphere? I thought that Anthony Watts pretty much blew this manure away with his surfacestations.org work?!
“NASA compares that global mean temperature to its baseline period of 1951-1980”.
Hmmm, I wonder why they picked those years? Hmmm. They just happen to be the coldest decades of the 20th century. Oh, and don’t forget how they adjusted the temperatures prior to 1950 downward and the post 2000 temperatures upward to a) remove the hot years of the 1930’s and 1940’s from being in the warmest top 5 and b) make sure the 2010’s have the hottest years on record.
Mr. Layman here.
As I understand it, back in the 1920’s the standard of 30 years was set for the official basis of “average” because they only had 30 years worth of reliable data back then. So 30 year blocks of time.
(Of course, now they’ve decided those first 30 years of data aren’t reliable after all. Too warm so they’ve cooled them.)
There is no reason why they couldn’t switch to an “average” based on 60 years.
You could also form the baseline from all available data. For GISTEMP this is 1880 to 2021, or 142 years. The shape of the graph and the trends remain exactly the same whichever way you do it, so it doesn’t matter whether you pick a 30-, 60-, or 142-year period for the baseline.
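A minimal sketch of that point, using made-up numbers in place of the real GISTEMP series: switching baselines only shifts the anomalies by a constant, so a fitted trend is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2022)
# Hypothetical global-mean temperatures: a small warming trend plus noise (illustrative only).
temps = 13.8 + 0.008 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

def anomalies(t, yrs, start, end):
    """Anomalies relative to the mean over the baseline years [start, end]."""
    baseline = t[(yrs >= start) & (yrs <= end)].mean()
    return t - baseline

a_30yr = anomalies(temps, years, 1951, 1980)   # GISS-style 30-year baseline
a_full = anomalies(temps, years, 1880, 2021)   # whole-record baseline

# The two anomaly series differ only by a constant offset...
print(np.ptp(a_30yr - a_full))   # ~0: the difference is a constant
# ...so the fitted slope is identical either way.
print(np.polyfit(years, a_30yr, 1)[0], np.polyfit(years, a_full, 1)[0])
```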
Actually, one could use any number at all for a baseline. However, one of the benefits of using a 30-year period immediately preceding the current decade is that it allows plotting the ‘anomalies’ in red and blue, above and below the baseline, respectively, accentuating the difference between the two with a lot of red splashed on the graph.
I suspect you will say that it doesn’t matter. However, why don’t other sciences use that approach? They typically show their time-series as actual measurements, perhaps re-scaled or truncated to remove the white space. Alternatively, they may normalize the data in some fashion. The bottom line is that other sciences plot data in the same manner as mathematicians, so that conventional metrics such as slope are obvious and not subjectively influenced by selection of colors. Objectivity seems to be of more importance to physicists than to alarmist climatologists.
I actually hate those red and blue plots so there will be no challenge by me if you want to criticize their use. In fact, I’ll jump right in there with you. I don’t care if it is NASA, NOAA, or whoever that’s doing it. It drives me crazy all the same.
Must be because use of the word “robust” is so passé nowadays.
The song and the mendacity are the same. No matter how many times global warming predictions have failed, their march continues: overt false propaganda, rampant rewriting of the past, and aggressive, censorious, despotic racketeering.
Religion and politics have always been hot topics. But I remember when the weather was a topic we could engage in without apocalyptic political hyperbole such as ‘existential threat’.
I also remember when scientists knew that questioning theories and conclusions was the essence of the scientific method and that nothing was ever absolutely and finally settled because we don’t know what we don’t know. Now the head of the supposedly premier scientific agency of the U.S. government tells us that “science leaves no room for doubt”. That’s the most ignorant, hubristic, unscientific statement one could ever make.
I think we could solve the “climate crisis” by assuring all data are reviewed by 50% Republicans and 50% Democrats…stable temps forevah!
The Figure shows the published GMST with, (left) the official uncertainty, and (right), the uncertainty when the systematic measurement errors from solar irradiance and wind speed effects are included in the land-surface data, and the estimated average errors from bucket thermometers, ship engine intakes and ARGO floats are included in the SST.
The book chapter is here.
The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.
And these don’t show the several degrees of variations caused by the standard deviations from all the averaging.
Your “book chapter” link is dead. Please provide the referenced expected value and “estimated average error” data, in usable form.
Thx in advance…
It’s been a day now. Any luck finding the requested expected value and “estimated average error” data, with distribution info, in usable form, for that second plot? I think it’s a decade old, and part of a paper. Albeit one with nada citations.
Or could it be that you would rather that no one check out your undocumented statement that “The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.“.
Naaaaaah…
It is easy to calculate the uncertainty envelope. Assuming most temperature measurements have an uncertainty of +/- 0.5C then what is the overall uncertainty when all the thousands of temperatures are added together to form the average?
The sum of all the temperatures will have an uncertainty of ẟTsum = sqrt[ (ẟT1)^2 + (ẟT2)^2 + … + (ẟTn)^2 ] = sqrt[ n(ẟT)^2 ] = ẟT·sqrt[ n ].
The error band gets so wide so quickly that the sum of all the uncertainties hides any possible trend line – or even better the trend line can be anything!
You can, however, do your own calculation if you wish. Another way would be to use relative uncertainties in which case you would use the formula
ẟTavg/ Tavg = sqrt[ (ẟT1/T1)^2 + (ẟT2/T2)^2 + … + (ẟTn/Tn)^2 ]
but you probably won’t come up with anything very much different.
I must have been arguing with you for almost a year now, and you still show zero understanding that an average is not the same as a sum.
You have a vested interest that requires forcing the uncertainties of these tiny trends as small as possible. If real-world numbers were to emerge, your castle of playing cards collapses into statistical insignificance.
If the real numbers are going to bring my castle of playing cards down (whatever you think that is), it would be in my interest to exaggerate the uncertainty as much as possible.
TG said: “ẟTsum = sqrt[ (ẟT1)^2 + (ẟT2)^2 + … + (ẟTn)^2 ] = sqrt[ n(ẟT)^2 ] = ẟTsqrt[ n ]”
YES. That is correct per Taylor 3.16.
TG said: “ẟTavg/ Tavg = sqrt[ (ẟT1/T1)^2 + (ẟT2/T2)^2 + … + (ẟTn/Tn)^2 ]”
NO. That does not follow from Taylor 3.18. Fix the arithmetic mistake and resubmit your solution. If you would show your work that would be helpful. Take it slow. Show each step one by one. Watch the order of operations and/or put parentheses around terms to mitigate the chance of a mistake.
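For what it’s worth, here is a minimal simulation sketch of the point both sides keep circling, assuming independent, zero-mean errors with a standard uncertainty of 0.5 (both the ±0.5 figure and the independence are assumptions, not established facts about the stations): the uncertainty of a sum grows as sqrt(n), while the uncertainty of an average shrinks as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, u = 1000, 20000, 0.5           # n readings, each with an error drawn from N(0, u)

errors = rng.normal(0.0, u, size=(trials, n))
sum_err = errors.sum(axis=1)              # error in the sum of the n readings
avg_err = errors.mean(axis=1)             # error in the average of the n readings

print(sum_err.std(), u * np.sqrt(n))      # both ~15.8: uncertainty of the sum  = u * sqrt(n)
print(avg_err.std(), u / np.sqrt(n))      # both ~0.016: uncertainty of the average = u / sqrt(n)
```

This settles nothing about whether station errors really are independent or purely random; it only shows what each propagation formula implies.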
Your hat size has swelled a lot today.
Stokesian patience. You flatter every commenter you respond to, (whether they realize it or not) with the inference that, sooner or later, they’ll snap out of it.
“The error band gets so wide so quickly that the sum of all the uncertainties hides any possible trend line – or even better the trend line can be anything!”
That is perhaps part of the way to calculate the standard error of a trend line, which is the relevant parameter here, but it does not carry the calculation to completion. For a trend line with only expected values, find an earlier comment by Willis E to get schooled. I can provide the expansion of that standard error with any kind of distributed errors for each datum. They don’t need to be equal. They don’t need to be symmetrical. They don’t even need to be the same distribution type. Now, I don’t have stochastic evaluation software like @RISK or Crystal Ball, so I can’t deal with correlations. But since uncorrelated data provides the largest standard errors for trends, I can provide the worst case answer(s).
But this all begs the larger question of why Pat Frank, who has claimed to have calculated, from the ground up, the data distributions – of whatever kind – that comprise the vertical “error bar” line segments that spread from every data point in his second plot, has not provided them. I have not questioned their provenance (here), but only wish to check out his data free claim that “The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.“. He might be correct, but with Pat Frank, “Trust but verify” is the best policy.
H/T to Willis E. This is how we calculate the standard error of a trend, with expected value only data.
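The formula itself did not survive the formatting here, but for expected-value-only data the usual quantity is the ordinary least-squares standard error of the slope; a minimal sketch with made-up data (the anomaly series below is illustrative only):

```python
import numpy as np

def slope_and_se(x, y):
    """OLS slope and its standard error, treating the data as exact (no measurement uncertainty)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b = np.cov(x, y, bias=True)[0, 1] / x.var()          # slope
    a = y.mean() - b * x.mean()                          # intercept
    resid = y - (a + b * x)
    s2 = (resid ** 2).sum() / (n - 2)                    # residual variance
    return b, np.sqrt(s2 / ((x - x.mean()) ** 2).sum())  # slope, standard error of the slope

years = np.arange(1980, 2022)
anoms = 0.018 * (years - 1980) + np.random.default_rng(2).normal(0.0, 0.1, years.size)
print(slope_and_se(years, anoms))
```

Autocorrelation and measurement uncertainty would widen this; the sketch only covers the expected-value-only case referred to above.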
Another canned equation, applied without understanding.
Well done blob, you Shirley showed them this time.
“…applied without understanding”
Pray elucidate. What am I missing? The equation is valid, and its underlying assumptions are commonly known. And I would only use it as a base, widening it based on Pat’s data distributions.
Pat’s data can be trended, and the statistical durability of those trends can be quantified. At least if he has the usable density functions for it that he claims are the bases for his plot. He should be eager to post them.
AGAIN, bdgwx and Bellman are doing the heavy lifting here. All I want to see is how the statistical durability of his trends squares with his backup-free assertion.
You cannot increase knowledge by averaging. This is effectively what you are arguing.
We can increase our knowledge by using more data, properly. That, is effectively what we are arguing.
No, you can not increase knowledge of individual measurements by using more and more independent measurements of different things.
More moving of goal posts. Carlo said averaging doesn’t increase knowledge, then you change this to averaging doesn’t increase knowledge of individual measurements.
I’d still disagree with this. Knowing the average of a population does increase the knowledge of individual measurements – it tells you if they are above or below average.
Looks like you’ve chosen to follow bzx on the road down to Pedantry.
The mean tells you nothing about a distribution. Where did you learn about statistics?
You need to know the standard deviation or variance so you know how well the mean represents the spread of the data.
Look at the screen capture. It’s a sampling simulation. The basic distribution (top) is equivalent to the Southern Hemisphere and the Northern Hemisphere in summer time. What does the arithmetic mean tell you about the distribution? Not much.
Do me a favor and take the standard deviation of the two sample distributions and multiply them by the square root of the sample size. You’ll see they equal the population SD. That is where the equation SEM = σ / √N originates. The standard deviation of the sample means is the SEM!
That is why dividing by the number of data points is ridiculous. Calling stations a sample doesn’t meet the requirement that samples have the same mean and variance as the population. If you want to deal with statistics, at least do so properly.
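A quick simulation of the relationship being described, with a made-up bimodal population standing in for the two-hemisphere example: the standard deviation of the sample means, times √N, recovers the population standard deviation, which is exactly where SEM = σ/√N comes from; and the SEM describes how well the mean is pinned down, not the spread of the underlying data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Made-up bimodal "population": two hemispheres with different typical temperatures.
population = np.concatenate([rng.normal(5, 3, 50000), rng.normal(25, 3, 50000)])

N = 100                                                      # sample size
sample_means = np.array([rng.choice(population, N).mean() for _ in range(5000)])

sigma = population.std()                                     # population SD (~10.4 here)
sem = sample_means.std()                                     # SD of the sample means
print(sigma, sem * np.sqrt(N))   # both ~10.4: SD of sample means * sqrt(N) ≈ population SD
print(sem)                       # ~1.0: precision of the mean, not the ~20-degree spread
```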
And the goalposts shift again. No, a mean does not tell you what the distribution is, it doesn’t tell you a lot of things and you can spend the next few comments itemizing all those things – but it does not explain why you think a mean tells you nothing.
An arithmetic mean tells you the central value of a range of numbers. That is all. It tells you nothing about the shape of the distribution.
A variance or standard deviation is needed to describe the variation/dispersion of the data in a distribution.
In a measurement of a single thing with the same device, multiple times, the mean will define the true value, if and only if the distribution of the individual measurements form a Gaussian distribution.
Finally an admission that a mean does tell you something, though it won’t necessarily tell you what the “central value of a range of numbers” is.
Of course a mean on its own does not tell you what the distribution is, nor for that matter do the mean and standard deviation. There are lots of things that won’t tell you the distribution of a set of numbers, but can still be useful. The sum of the numbers won’t tell you the distribution, but nobody here is insisting that adding numbers tells you nothing.
“In a measurement of a single thing with the same device, multiple times, the mean will define the true value, if and only if the distribution of the individual measurements form a Gaussian distribution.”
Wrong on all counts. The mean won’t define the true value any more than a single measurement will. It will just be more precise assuming there are random errors in the measurement. And it can do this regardless of the distribution of the measurements.
I thought you were just making stuff up, but I now realize you’re really pulling stuff out of your arse. This doesn’t even deserve a response.
The mean tells you nothing about any individual point of data. It is the simple center of a group of numbers. The mean may not even be the same as any data point. You have no idea of the distribution shape or the range of values used to calculate it. You have no idea if the distribution is skewed or to what extent.
Only the mean and standard deviation together will tell you the dispersal of data surrounding the mean.
That is the uncertainty in the trend assuming the data is 100% accurate with no uncertainty in the data points themselves!
It does not address the uncertainty in the data used to generate the trend.
When are you going to learn that the uncertainty in the data is going to make a trend line very, very wide, such that you can not “assign” a definite value to the variable because it can be any value the line touches.
If you are using a point like 75F from 1910 the base uncertainty will be +/- 0.5. That requires any trend line using that value be 1F wide.
“That is the uncertainty in the trend assuming the data is 100% accurate with no uncertainty in the data points themselves!
It does not address the uncertainty in the data used to generate the trend.”
No. Pat Frank’s claim is that he has thrown in the kitchen sink and included every potential source of uncertainty. I am willing to include his total uncertainty for every data point that Pat Frank is claiming, to validate his data free claim that the relevant trends are statistically unjustifiable. AGAIN, I am not disputing their provenance (here). I just want to check them out.
Dr. Frank’s dog must’ve eaten his homework again…..
“When are you going to learn that the uncertainty in the data is going to make a trend line very, very wide, such that you can not “assign” a definite value to the variable because it can be any value the line touches.”
Uncertainty in the data will indeed “widen” the standard errors of any of its trends. What is missing here is the simple evaluation of how much. Pat Frank expects acceptance of a claim based only on Rorschachian eyeballing. He might be right, but he should provide his numbers and allow proper checking.
This is a baldfaced lie, blob.
“This is a baldfaced lie, blob.”
First, nope.
Second, you’re both goal post moving and losing track of my actual data request. I simply want to see the distribution information that resulted in Pat Frank’s “error band” line segments. Evaluation of them either backs up his claim of trend “unknowability” or it doesn’t.
I suspect you think you know the answer already, from your deflections from Dr. Frank’s radio silence on providing the decade old distribution data. Assuming it even exists, and that the line segments aren’t just rulered into the cartoon. Of course, there might be “But wait, there’s more!”, after that. But let’s do this first.
Reluctance to provide this data channels early ’80s oilfield experience. Both HTC and Christensen used to give you a bottle of (good) whiskey for every one of their bits you ran. We would always prank our back-to-backs (the drilling supervisor who spelled you on your days away) by telling the bit salesman:
“Don’t give Fred any whiskey when he comes on.”
“Why?”
“Hey, dumb ****! He’ll drink it!”
Yep. The paper analyzed the effects of the 4 W/m2 cloud uncertainty.
A single source of uncertainty.
Rest of your rant unread.
“The paper analyzed the effects of the 4 W/m2 cloud uncertainty.”
Again, nope. Here are multiple, systematic (his term) errors mentioned in the abstract of the paper that Pat Frank linked us to. I.e., the one in this post, and the source of his figure 2 cartoon:
https://www.worldscientific.com/doi/abs/10.1142/9789813148994_0026
“Rest of your rant unread.”
I doubt it. More like a deflection from your earlier deflection.
Another unread blob rant.
Here is what you typed.
Here is what I replied.
My answer has nothing to do with anything but your assertion. The equation you showed has no allowance for any uncertainty or error in the measurement data. It only evaluates how well the trend matches the data points used, i.e. the data is assumed to be 100% accurate with no uncertainty. That is, they are treated like counting numbers, not measurements.
If your data is integer values, the trend line should be at least +/- 0.5 wide. This would cover any 1/1000th measurement multiple times over!
It has the word “error” in the title so it tells them everything they want to know.
You’re a smart guy so you may have already figured this out, but Bellman and I discovered that Pat’s method boils down to these calculations.
(1a) Folland 2001 provides σ = 0.2 “standard error” for daily observations.
(1b) sqrt(N * 0.2^2 / (N-1)) = 0.200 where N is large (ie ~365 for annual and ~10957 for 30yr averages)
(1c) sqrt(0.200^2 + 0.200^2) = 0.283
(2a) Hubbard 2002 provides σ = 0.25 gaussian distribution which Pat calculates to 3 decimal places as 0.254 for MMTS daily observations.
(2b) sqrt(N * 0.254^2 / (N-1)) = 0.254 where N is large as in (1b)
(2c) sqrt(0.254^2 + 0.254^2) = 0.359
(3) sqrt(0.283^2 + 0.359^2) = 0.46
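Just to make the arithmetic in steps (1a) through (3) easy to check, here is a sketch that reproduces the numbers as reconstructed above (this is our reconstruction of the calculation, not a claim about how the paper itself codes it up):

```python
import numpy as np

def step(sigma, n):
    """sqrt(N * sigma^2 / (N - 1)); tends to sigma itself when N is large."""
    return np.sqrt(n * sigma**2 / (n - 1))

n = 10957                                  # ~30 years of daily observations
u1 = step(0.200, n)                        # (1b) ~0.200  (Folland's 0.2 figure)
u1c = np.hypot(u1, u1)                     # (1c) ~0.283
u2 = step(0.254, n)                        # (2b) ~0.254  (Hubbard MMTS figure)
u2c = np.hypot(u2, u2)                     # (2c) ~0.359
print(round(u1c, 3), round(u2c, 3), round(np.hypot(u1c, u2c), 2))   # 0.283 0.359 0.46  -> (3)
```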
Here are our concerns.
1) Folland and Hubbard seem to be describing the same thing, so I question why they are being combined. Though Folland is rather terse on how his 0.2 figure was derived, or even exactly what it means.
2) The use of sqrt(N*x^2 / (N-1)) to propagate uncertainty into an annual or 30yr average implies a near perfect correlation for every single month in the average which I question. There’s almost certainly some auto-correlation there, but there’s no way it’s perfect.
3) There is no propagation of uncertainty for the gridding, infilling, and averaging steps, nor is there any discussion of the coverage uncertainty or the spatial and temporal correlations that might require upward adjustments to the uncertainty.
Here are good references for the uncertainty analysis provided by several other groups each of which provide significantly more complex uncertainty analysis and significantly different results.
Christy et al. 2003 – Error Estimates of Version 5.0 of MSU–AMSU Bulk Atmospheric Temperatures
Mears et al. 2009 – Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte‐Carlo estimation technique
Rohde et al. 2013 – Berkeley Earth Temperature Averaging Process
Lenssen et al. 2019 – Improvements in the GISTEMP Uncertainty Model
Huang et al. 2020 – Uncertainty Estimates for Sea Surface Temperature and Land Surface Air Temperature in NOAAGlobalTemp Version 5
You give me far too much credit for this. All I did was query the use of a misapplied equation from Bevington, and tried to find out what assigned uncertainty meant, and why it wasn’t described in the GUM.
It is addressed in the GUM.
JCGM 100:2008
4.3.2 The proper use of the pool of available information for a Type B evaluation of standard uncertainty calls for insight based on experience and general knowledge, and is a skill that can be learned with practice. It should be recognized that a Type B evaluation of standard uncertainty can be as reliable as a Type A evaluation, especially in a measurement situation where a Type A evaluation is based on a comparatively small number of statistically independent observations.
But when I asked if this meant type B uncertainty I was told, no it wasn’t anything used in the GUM, the GUM didn’t cover all types of uncertainty, etc.
If the argument is that assigned uncertainty is Type B, the question then is why you don’t apply the same propagation rules to it. Equation (10) in the GUM section 5.1.2 specifically says it applies to both types of uncertainty:
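As I read JCGM 100:2008, that equation, the law of propagation of uncertainty for uncorrelated input estimates, with the u(x_i) permitted to be either Type A or Type B evaluations, is:

$$
u_c^2(y) \;=\; \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i), \qquad y = f(x_1, x_2, \ldots, x_N)
$$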
Here’s what Pat Frank said, (my emphasis.)
And here’s Tim Gorman (his emphasis)
…
Then when I pointed out that Carlo, Monte said they were the same, Tim replied
Why do you need the uncertainties of these linear plots to be as small as possible?
What linear plots? I’m just trying to figure out how Pat calculates the uncertainty in the annual anomaly average.
If by linear plot you mean the linear regression, I’ve been arguing for some time that you have to look at the uncertainty and that this is generally larger than you think, because of auto correlation etc. I tend to use the Skeptical Science Trend Calculator because it shows larger uncertainties.
You on the other hand seem to want to ignore all uncertainties when it comes to trends you like, such as Monckton’s cartoon pause, or anyone claiming there has been cooling after the last few years.
Why don’t you look at the study to see how the uncertainty was determined? What was used to determine the “assigned uncertainty”?
Because it keeps making assertions with no explanation and Pat was on hand so I thought it would be easier to ask him.
There’s little point wading through the whole document, most of which I’m unlikely to understand, when there’s already what appears to be a poor assumption underlying the argument.
Total nonsense.
What I said was that the GUM is NOT the end-all-be-all for the subject. It can’t be.
Go read the title (again). Duh.
I quoted the comments I was talking about. Either the “assigned” uncertainties of which Pat speaks are Type B or they are not. It’s up to you lot to come up with a consistent explanation. All I want to know is where is the explanation for how they propagate.
I am shocked by the revelation that the experts on GUM, NIST, equations etc. do not understand this.
Shocking information.
You need to read the following very critically and with the purpose of expanding your knowledge.
This should tell you that a measurand can be determined by a combination of other measured variables. In order to do so, you must be able to define a function that determines the value of “Y”. Something like the Ideal Gas Law where
PV = nRT or
for the equation of continuity for incompressible fluids
A1V1 = A2V2
I have asked you to provide the function you are using to determine the value of Y and you have yet to do so.
Do some soul searching. The GUM and Dr. Taylor both deal with real physical measurements of a MEASURAND and how to determine the uncertainty associated with real physical measurements.
I suspect the best you will have for a function that defines GAT is calculating a mean or average.
THAT IS NOT DETERMINING A PHYSICAL QUANTITY OF A MEASURAND!
Consequently, none of this even comes close to applying most of the GUM or any other metrology technique for determining uncertainty. At best you are using statistics to try to prove a theory. As such, you need to insure that you are following the assumptions necessary for these statistical calculations when computing a GAT.
The very first decision you must make is if you are using samples (i.e. stations) of a larger population or if you are using the entire population. That very much affects the statistical parameters and their evaluation.
I have also tried to get you to accept Significant Digits rules when dealing with physical measurements. You continue to treat measurements as counted numbers; they are not.
Here is an explanation from a physics course at Bellevue College. https://www.bellevuecollege.edu/physics/resources/measure-sigfigsintro/a-uncert-sigfigs/
Here is a presentation from Purdue Univ.
http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch1/sigfigs.html
“It is important to be honest when reporting a measurement, so that it does not appear to be more accurate than the equipment used to make the measurement allows. We can achieve this by controlling the number of digits, or significant figures, used to report the measurement.”
And from Washington Univ. in St. Louis.
http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm
“By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing.”
True, which is why no one is doing that. But the value can be used to calculate others with more usable, statistically valid sig figs.
If we count you as one when you’re alive, and zero when you’re dead, we can’t justify anything on the r.h.s. of the decimal point. But if we look at the trend of death rate for your city, we would take that same data for all city residents and arrive at a trend with several, justifiable digits to the right of the decimal point.
Hey, I posted references that justify my assertion. All you can do is say those references don’t matter because you and others aren’t doing that.
“True, which is why no one is doing that.”
What a simplistic claim! Come on man! How many graphs have you posted showing anomaly values in the early 20th century with 2 or 3 decimal places. You can’t get that from measurements recorded as integers!
See the attached graph. The points have at least 2 decimal places and probably 3. How do you get that from integer resolution?
Show us the Significant Digit rules from an accepted source that let you do that!
Here is the graph for the above.
“See the attached graph. The points have at least 2 decimal places and probably 3. How do you get that from integer resolution?
Show us the Significant Digit rules from an accepted source that let you do that!”
And predictably, you haven’t provided the data to back up your cartoon. Do so, and I will be happy to.
Did you not examine the graph? What does it show for the temperature in 1881? How many decimal places?
The scale has one decimal place to the right. But as you should know, the data behind it need not be limited to that.
Folks, first Pat Frank, and now Jim Gorman. Both with the usual reticence about actually providing data. They prefer cartoons that they can do Rorschachian interpretations with…
I’ll ask again! What does the graph have for the temperature value in 1881? How many decimal places? How about 1882? Are the values the same or different in the second decimal place?
Is it beyond you to say that 1881’s temp is shown as approximately -0.25, and probably even to a third decimal place?
“Is it beyond you to say that 1881’s temp is shown as approximately -0.25, and probably even to a third decimal place?”
That was exactly my point. I have no idea what you’re getting at. If you’re saying that temperature measuring processes for 1881 individual measurements make this many rhs decimal places impossible, I say audit Engineering Statistics 101 at your local CC. The spatially weighted average of enough of those individual measurements, with known error bands, can easily justify more of those significant figures than the individual measurement(s). You seem hysterically blocked on this fundamental truth, understood for over a century. I can now better understand what patient Bellman has been tutoring you on, over and over.
Or are you just confusing me with another poster…?
I answered your post.
You can not do this. Please show a reference from a certified lab or University that allows what you assert. Better yet, provide a rule as to how the number of decimal places is decided upon.
Quality control people, certified labs, and machinists all understand this isn’t possible. If it were, higher and higher precision measuring devices would not be needed.
Why do you think the NWS spent billions changing thermometers from LIG to higher resolution devices?
I’ll give you an example that quality control people would be familiar with. Let’s say you process 10,000 rods of a length that is required to have a 0.01 mm tolerance.
Your proposal is to measure each one and find the average. That would be wrong. The machine, or worse, machines, making the rods could, as they wear, be making rods that are further and further out of spec, but if the errors were random your average would still look OK. Sooner or later, you would find out from customers that your product will no longer work for them.
In fact, by using your process, you could even use measuring devices that don’t have sufficient resolution to precisely measure the individual rods since you say you can increase precision by averaging multiple measurements of different rods.
Quality people know that you must sample sufficient individual rods and measure each accurately, i.e., with enough resolution to ensure they meet specs, in order to ensure all the rods meet requirements. You simply can’t increase precision by averaging. Errors can grow and you’ll never know it by averaging.
“The machine, or worse machines, making the rods as they wear could be making rods that are further and further out of spec but if the errors were random, your average would still be ok.”
You’re confusing/conflating the standard error of any one measurement with the loss of accuracy versus repeated operation, from equipment wear. It’s no wonder that you fail to understand basic statistical laws.
I am confusing nothing. I am trying to show you that measurements of different things cannot be used to increase precision.
There is no loss of accuracy in measurements in my example. The measuring devices are not what wears and changes. It is the rods themselves that are changing.
Just what statistical laws has this example violated? It is a real world example of quality control and the statistics needed to maintain quality. Please refute what I have given with some statistical laws.
“The measuring devices are not what wears and changes. It is the rods themselves that are changing.”
The loss of accuracy here comes from treating the run as a constant value, instead of a trend of changing values. Ever since we began making interchangeable parts, we noted this process and accounted for it. The average size of each run is:
With more random sampling you will converge closer and closer on that correct expected value. Also, if you include subsequent runs with new cutting tools between runs, the convergence will continue. To wit, if you sampled 1000 items, your convergence on the actual average value would be 10× closer than if you just sampled 10. Hence the justification for that extra sig fig.
“Better yet, provide a rule as to how the number of decimal places is decided upon.”
It would be a generalized formulation of the statistical rule that the sum of the variances is equal to the variance of the sum. For measurements with equal standard deviations, 100 or more of them would justify another sig fig for their average. But it would also apply to unequal standard deviations for differing measurement mechanisms, different distributions, whatever. Anything that reduces the standard deviation of the average by a factor of 10 or more relative to the smallest standard deviation of any datum within the averaged data set would justify it. Same for the next sig fig, with the minimum number of required data points at 10,000.
Do you see what I’m doing? Yet?
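A minimal sketch of the rule of thumb being described, using an assumed per-measurement standard deviation of 0.5 and assuming the readings are independent (which is exactly what the other side of this thread disputes):

```python
import numpy as np

sigma_single = 0.5                 # assumed standard deviation of one measurement
for n in (10, 100, 10000):
    sigma_avg = sigma_single / np.sqrt(n)        # SD of the average of n independent readings
    extra_digits = int(np.floor(np.log10(sigma_single / sigma_avg)))
    print(n, round(sigma_avg, 4), extra_digits)
# 10     0.1581  0   -> no extra digit yet
# 100    0.05    1   -> one extra significant figure, per the rule above
# 10000  0.005   2   -> two extra figures
```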
Word salad with no hard and fast rules or no math.
Show some valid references that confirm the math you are espousing.
30 seconds I’ll never get back. The only adder here is noting that, once sampling (without replacement) brings the standard deviation of the average an order of magnitude lower than that of the sample with the smallest one – if they are not identical – that justifies another sig fig. There is an engineering term for this: “by inspection”.
Ignorance isn’t the problem here. It’s more the willful effort to remain so….
https://www.investopedia.com/terms/l/lawoflargenumbers.asp#:~:text=The%20law%20of%20large%20numbers%2C%20in%20probability%20and%20statistics%2C%20states,average%20of%20the%20whole%20population.
Once again you are dealing with numbers not measurements.
No one is going to believe your assertions without some references.
I have provided numerous references supporting mine from well known Universities. You need to do the same to provide some references of your own.
“I have provided numerous references supporting mine from well known Universities. You need to do the same to provide some references of your own.”
I agree with everything that your references say, because we are saying the same things. None of them however, discuss significant figures w.r.t. standard deviations.
Here’s one that does.
https://www2.chem21labs.com/labfiles/jhu_significant_figures.pdf
Here’s an excerpt:
Although the maximum number of significant figures for the slope is 4 for this data set, in this case it is further limited by the standard deviation. Since the standard deviation can only have one significant figure (unless the first digit is a 1), the standard deviation for the slope in this case is 0.005. Since this standard deviation is accurate to the thousandths place, the slope can only be accurate to the thousandths place at the most. Therefore, the slope for this data set is 0.169 ± 0.005 L K-1. If the standard deviation is very small such that it is in a digit that is not significant, you should not add additional digits to your slope. For example, if the standard deviation in the above example was two orders of magnitude smaller, you would report it as 0.1691 ± 0.00005 L K-1. Note that here the slope has its maximum number of significant digits based on the data, even though the standard deviation is in the next place.
If you look at the figures above this excerpt in the link, you can see how he demonstrated how to increase the number of significant figures in a slope if his standard deviation was small enough to justify it. The same rule applies to averaging. Since standard deviations tend to drop when averaging more data, whenever the standard deviation gets small enough that its first significant digit moves one decimal place to the right, the average measurement may be reported with one more sig fig.
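A small sketch of the reporting rule quoted in the excerpt (round the standard deviation to one significant figure, or two if it leads with a 1, then report the value to that same decimal place); the unrounded slope below is a made-up stand-in, and the rounded result matches the excerpt’s 0.169 ± 0.005:

```python
from math import floor, log10

def report(value, sd):
    """Round sd to one significant figure (two if its leading digit is 1),
    then round the value to the same decimal place."""
    sig = 2 if f"{sd:e}"[0] == "1" else 1
    decimals = -int(floor(log10(abs(sd)))) + (sig - 1)
    return round(value, decimals), round(sd, decimals)

print(report(0.16894, 0.0052))   # -> (0.169, 0.005)
```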
Balls in your court. Feel free to provide anything that rebuts this.
Oh, BTW, in spite of Pat Frank’s radio silence, I found a non-paywalled version of his 2010 paper. His bottom line appeared to be a constant standard deviation of 0.46 degC in every annual reading from 1880 to present.
The good news for Pat is that his singular error bars, replicated by no one, indeed raised the standard trend errors: by a factor of ~5 for the 1980-on data, and by a factor of ~4 for the earlier data. The bad news for the good Dr. is that they still left us a 1980-2018 slope (newer data used to avoid being accused of not using “pause” data) of 1.75 deg/century with a standard error of 0.69 deg/century, versus a pre-1980 slope of 0.36 deg/century with a standard error of 0.17 deg/century. Put it all together and the chance that the change in trend was zero or less is all of 2.5%. The chance that the change in slope is >1 deg/century is 70.5%.
Those Rorschachian eyeballs might need some checkin’….
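For anyone who wants to check that arithmetic, here is a sketch that takes the quoted slopes and standard errors at face value and assumes the two trend estimates are independent and normally distributed (both assumptions, not established facts):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

slope_new, se_new = 1.75, 0.69        # 1980-2018 trend, deg/century (as quoted above)
slope_old, se_old = 0.36, 0.17        # pre-1980 trend, deg/century (as quoted above)

diff = slope_new - slope_old
se_diff = sqrt(se_new**2 + se_old**2)

print(norm_cdf((0.0 - diff) / se_diff))        # ~0.025: chance the change in trend is <= 0
print(1.0 - norm_cdf((1.0 - diff) / se_diff))  # ~0.71: chance the change exceeds 1 deg/century
```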
Gosh Pat. Four days now and nada from you. What does it take to get you to provide usable y-scale distribution data for a 10-year-old cartoon that you presented?
I’m not even saying that your data free “hopelessly lost within the uncertainty envelope” claim is incorrect. I’m just interested to see if it bears normal scrutiny. What’s the harm?
You are in no position to make demands, bighatsizeblob.
What “demand”? I can’t “demand” anything from Pat Frank. He has a comfortable university/government sinecure from which he is functionally fire proof.
Dr. Frank should be tickled pink to provide the data that backs up his assertion. Unless…..
Have a nice day, blob.
With upcoming reversals of ocean cycles and likely decreasing temperatures, it occurs to me that we should start watching for the adjustment bureau to begin removing the adjustments they have made to the last 40 years of records that increased the temperatures, saying they have a new algorithm or have “fixed” the old one, thereby cooling the recent past so that, as if by magic, there is no current cooling.
Am I overthinking it?
Clearly cannot trust climate Scientology?
Griff, a question.
If this year was a tipping point in extreme weather events, and it turns out that it was cooler, doesn’t that mean it’s cooling that is the danger?
ok, and the problem is …?
1880 is at, or nearly at, the end of the “Little Ice Age”; George Washington hauling cannon across the Potomac and all that.