This post provides updates of the values for the three primary suppliers of global land+ocean surface temperature reconstructions—GISS through September 2018 and HADCRUT4 and NOAA NCEI (formerly NOAA NCDC) through August 2018—and of the two suppliers of satellite-based lower troposphere temperature composites (RSS and UAH) through September 2018. It also includes a few model-data comparisons.
This is simply an update, but it includes a good amount of background information for those new to the datasets. Because it is an update, there is no overview or summary for this post. There are, however, simple monthly summaries for the individual datasets. So for those familiar with the datasets, simply fast-forward to the graphs and read the summaries under the headings of “Update”.
INITIAL NOTES:
It’s been almost two years since I’ve published this update. In that time, in July 2017, RSS (Remote Sensing Systems) revised their lower troposphere temperature data with their version 4.0 data. See the RSS webpage FAQ about the V4.0 TLT Update for more information. We briefly discussed the impacts of these changes recently in the post The New RSS TLT Data is Unbelievable! (Or Would That Be Better Said, Not Believable?) A quick Introduction (Cross posted at WattsUpWithThat here.) Dr. Roy Spencer also discussed the revised RSS v4.0 TLT data in his post Comments on the New RSS Lower Tropospheric Temperature Dataset, which was cross posted at WattsUpWithThat here.
IMPORTANT NOTE: The recent revisions to the RSS Lower Troposphere Temperature data may have brought the warming rate of their data more into line with climate model projections globally, but in the all-important tropics, the revisions had little impact on the disparity between models and data, as discussed and illustrated in Dr. Roy Spencer’s post Warming in the Tropics? Even the New RSS Satellite Dataset Says the Models are Wrong. [End important note.]
Back in 2016, we discussed and illustrated the impacts of the adjustments to surface temperature data in a number of posts:
- Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate? (WattsUpWithThat cross post is here.)
- UPDATED: Do the Adjustments to Land Surface Temperature Data Increase the Reported Global Warming Rate? (WattsUpWithThat cross post is here.)
- Do the Adjustments to the Global Land+Ocean Surface Temperature Data Always Decrease the Reported Global Warming Rate? (WattsUpWithThat cross post is here.)
The NOAA NCEI product is the new global land+ocean surface reconstruction with the manufactured warming presented in Karl et al. (2015). For summaries of the oddities found in the NOAA ERSST.v4 “pause-buster” sea surface temperature data see the posts:
- The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts
- On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014
Even though the changes to the ERSST reconstruction since 1998 cannot be justified by the night marine air temperature product that was used as a reference for bias adjustments (See comparison graph here), and even though NOAA appears to have manipulated the parameters (tuning knobs) in their sea surface temperature model to produce high warming rates (See the post here), GISS also switched to the new “pause-buster” NCEI ERSST.v4 sea surface temperature reconstruction with their July 2015 update.
IMPORTANT NOTE 2: NOAA recently updated their “Pause-Buster” ERSST.v4 sea surface temperature data to the “Pause-Buster2” ERSST.v5. See the post A Very Quick Introduction to NOAA’s New “Pause-Buster 2” Sea Surface Temperature Dataset ERSST.v5. The WattsUpWithThat cross post is here. [End important note 2.]
The UKMO also recently made adjustments to their HadCRUT4 product, but they are minor compared to the GISS and NCEI adjustments.
We’re using the UAH lower troposphere temperature anomalies Release 6.0 for this post as the paper that documents it has been accepted for publication. And for those who wish to whine about my portrayals of the changes to the UAH and to the GISS and NCEI products, see the post here.
The GISS LOTI surface temperature reconstruction and the two lower troposphere temperature composites are for the most recent month. The HADCRUT4 and NCEI products lag one month.
Much of the following text is boilerplate that has been updated for all products. The boilerplate is intended for those new to the presentation of global surface temperature anomalies.
Most of the graphs in the update start in 1979. That’s a commonly used start year for global temperature products because many of the satellite-based temperature composites start then.
We discussed why the three suppliers of surface temperature products use different base years for anomalies in chapter 1.25 – Many, But Not All, Climate Metrics Are Presented in Anomaly and in Absolute Forms of my free ebook On Global Warming and the Illusion of Control – Part 1 (25MB).
I’ve discontinued the model-data comparisons using 61-month filters. But I’m continuing to present the model-data 30-year trend comparison using the GISS Land-Ocean Temperature Index (LOTI) data.
We’ll start the updates with the surface temperature-based datasets.
GISS LAND OCEAN TEMPERATURE INDEX (LOTI)
Introduction: The GISS Land-Ocean Temperature Index (LOTI) reconstruction is a product of the Goddard Institute for Space Studies. Starting with the June 2015 update, GISS LOTI had been using the NOAA Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4) for ocean surface temperature data, the pause-buster reconstruction, which also infills grids without temperature samples. More recently, since August 2017, GISS LOTI has been using the NOAA Pause-Buster2 Extended Reconstructed Sea Surface Temperature version 5 (ERSST.v5). For land surfaces, GISS adjusts GHCN and other land surface temperature products via a number of methods and infills areas without temperature samples using 1200km smoothing. Refer to the GISS description here. Unlike the UK Met Office and NCEI products, GISS masks sea surface temperature data at the poles, anywhere seasonal sea ice has existed, and they extend land surface temperature data out over the oceans in those locations, regardless of whether or not sea surface temperature observations for the polar oceans are available that month. Refer to the discussions here and here. GISS uses the base years of 1951-1980 as the reference period for anomalies. The values for the GISS product are found here. (I archived the former version here at the WaybackMachine.)
Update: The September 2018 GISS global temperature anomaly is +0.75 deg C. According to the GISS LOTI data, global surface temperature anomalies made a small downtick since August, a -0.02 deg C decrease.
Figure 1 – GISS Land-Ocean Temperature Index
NCEI GLOBAL SURFACE TEMPERATURE ANOMALIES (LAGS ONE MONTH)
NOTE: The NCEI only produces the product with the manufactured-warming adjustments presented in the paper Karl et al. (2015). As far as I know, the former version of the reconstruction is no longer available online. For more information on those curious NOAA adjustments, see the posts:
- NOAA/NCDC’s new ‘pause-buster’ paper: a laughable attempt to create warming by adjusting past data
- More Curiosities about NOAA’s New “Pause Busting” Sea Surface Temperature Dataset
- Open Letter to Tom Karl of NOAA/NCEI Regarding “Hiatus Busting” Paper
- NOAA Releases New Pause-Buster Global Surface Temperature Data and Immediately Claims Record-High Temps for June 2015 – What a Surprise!
And:
- Pause Buster SST Data: Has NOAA Adjusted Away a Relationship between NMAT and SST that the Consensus of CMIP5 Climate Models Indicate Should Exist?
- The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts
- On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014
Introduction: The NOAA Global (Land and Ocean) Surface Temperature Anomaly reconstruction is the product of the National Centers for Environmental Information (NCEI), which was formerly known as the National Climatic Data Center (NCDC). NCEI merges their new “pause buster2” Extended Reconstructed Sea Surface Temperature version 5 (ERSST.v5) with the new Global Historical Climatology Network-Monthly (GHCN-M) version 3 for land surface air temperatures. The ERSST.v5 “pause buster2” sea surface temperature reconstruction infills grids without temperature samples in a given month. NCEI also infills land surface grids using statistical methods, but they do not infill over the polar oceans when sea ice exists. When sea ice exists, NCEI leaves polar ocean grids blank.
The source of the NCEI values is their Global Surface Temperature Anomalies webpage. (Click on the link to Anomalies and Index Data.)
Update (Lags One Month): The August 2018 NCEI global land plus sea surface temperature anomaly was +0.74 deg C. See Figure 2. It made a very minor downtick (a decrease of about -0.02 deg C) since July 2018.
Figure 2 – NCEI Global (Land and Ocean) Surface Temperature Anomalies
UK MET OFFICE HADCRUT4 (LAGS ONE MONTH)
Introduction: The UK Met Office HADCRUT4 reconstruction merges CRUTEM4 land-surface air temperature product and the HadSST3 sea-surface temperature (SST) reconstruction. CRUTEM4 is the product of the combined efforts of the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia. And HadSST3 is a product of the Hadley Centre. Unlike the GISS and NCEI reconstructions, grids without temperature samples for a given month are not infilled in the HADCRUT4 product. That is, if a 5-deg latitude by 5-deg longitude grid does not have a temperature anomaly value in a given month, it is left blank. Blank grids are indirectly assigned the average values for their respective hemispheres before the hemispheric values are merged. The HADCRUT4 reconstruction is described in the Morice et al (2012) paper here. The CRUTEM4 product is described in Jones et al (2012) here. And the HadSST3 reconstruction is presented in the 2-part Kennedy et al (2012) paper here and here. The UKMO uses the base years of 1961-1990 for anomalies. The monthly values of the HADCRUT4 product can be found here.
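For readers curious how a gridded product with blank cells becomes a single global anomaly, below is a minimal sketch of cosine-latitude area weighting that simply skips missing 5-deg grid cells. It is only an illustration of the general approach, not the Met Office's actual procedure, which (as noted above) works hemisphere by hemisphere before combining.

```python
import numpy as np

def global_mean_anomaly(anom_grid, lats):
    """Area-weighted mean of a (nlat, nlon) anomaly grid; blank (NaN) cells are skipped.

    anom_grid : 2-D array of monthly anomalies, NaN where a grid cell has no data.
    lats      : 1-D array of grid-cell centre latitudes in degrees.
    """
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(anom_grid)
    weights = np.where(np.isnan(anom_grid), 0.0, weights)  # blank cells get zero weight
    return np.nansum(anom_grid * weights) / weights.sum()
```

Dropping the blank cells from both numerator and denominator is what implicitly assigns them the average of the sampled cells; HADCRUT4 does this hemisphere by hemisphere before merging, which is the indirect assignment described above.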
Update (Lags One Month): The August 2018 HADCRUT4 global temperature anomaly is +0.59 deg C. See Figure 3. It remained essentially the same as the prior month, with a teeny -0.01 deg C downtick from July to August 2018.
Figure 3 – HADCRUT4
UAH LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (UAH TLT)
Special sensors (microwave sounding units) aboard satellites have orbited the Earth since the late 1970s, allowing scientists to calculate the temperatures of the atmosphere at various heights above sea level (lower troposphere, mid troposphere, tropopause and lower stratosphere). The atmospheric temperature values are calculated from a series of satellites with overlapping operation periods, not from a single satellite. Because the atmospheric temperature products rely on numerous satellites, they are known as composites. The level nearest to the surface of the Earth is the lower troposphere. The lower troposphere temperature composites include altitudes from zero to about 12,500 meters, but are most heavily weighted to altitudes below 3,000 meters. See the left-hand cell of the illustration here.
The monthly UAH lower troposphere temperature composite is the product of the Earth System Science Center of the University of Alabama in Huntsville (UAH). UAH provides the lower troposphere temperature anomalies broken down into numerous subsets. See the webpage here. The UAH lower troposphere temperature composite is supported by Christy et al. (2000) MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons. Additionally, Dr. Roy Spencer of UAH presents the monthly UAH TLT anomaly updates at his blog a few days before the release at the UAH website. Those posts are also regularly cross posted at WattsUpWithThat. UAH uses the base years of 1981-2010 for anomalies. The UAH lower troposphere temperature product is for the latitudes of 85S to 85N, which represent more than 99% of the surface of the globe.
The UAH lower troposphere data are now at Release 6. See Dr. Roy Spencer’s post here. Those Release 6.0 enhancements lowered the warming rates of their lower troposphere temperature anomalies. See Dr. Spencer’s blog post Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade and my blog post New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years, both of which were published 3 years ago in 2015. The UAH lower troposphere anomaly data, Release 6.0, through September 2018 are here.
Update: The September 2018 UAH (Release 6.0) lower troposphere temperature anomaly is +0.14 deg C. It dropped slightly since August (a decrease of about -0.05 deg C).
Figure 4 – UAH Lower Troposphere Temperature (TLT) Anomaly Composite – Release 6.0
RSS LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (RSS TLT)
Like the UAH lower troposphere temperature product, Remote Sensing Systems (RSS) calculates lower troposphere temperature anomalies from microwave sounding units aboard a series of NOAA satellites. RSS describes their product at the Upper Air Temperature webpage. The RSS product is supported by Mears and Wentz (2009) Construction of the Remote Sensing Systems V3.2 Atmospheric Temperature Records from the MSU and AMSU Microwave Sounders. RSS also presents their lower troposphere temperature composite in various subsets. See the webpage here. Also see the RSS MSU & AMSU Time Series Trend Browse Tool.
Note: As discussed in the initial notes of this post, RSS also released their version 4 of their lower troposphere temperature anomaly data, the monthly values of which can be found here.
Update: The September 2018 RSS lower troposphere temperature anomaly is +0.49 deg C. It dropped slightly (a downtick of -0.02 deg C) since August 2018.
Figure 5 – RSS Lower Troposphere Temperature (TLT) Anomalies
COMPARISONS
The GISS, HADCRUT4 and NCEI global surface temperature anomalies and the RSS and UAH lower troposphere temperature anomalies are compared in the next three time-series graphs. Figure 6 compares the five global temperature anomaly products starting in 1979. Again, due to the timing of this post, the HADCRUT4 and NCEI updates lag the UAH, RSS, and GISS products by a month.
I’ve discontinued the comparisons starting in 1998 and 2001. As expected, the global temperature responses to the 2014/15/16 El Niño effectively ended what was known as the global warming hiatus and brought those extremely short-term trends more into line with the models.
Because the suppliers all use different base years for calculating anomalies, I’ve referenced them to a common 30-year period: 1981 to 2010. Referring to their discussion under FAQ 9 here, according to NOAA:
This period is used in order to comply with a recommended World Meteorological Organization (WMO) Policy, which suggests using the latest decade for the 30-year average.
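For anyone who wants to reproduce that step, re-referencing is nothing more than subtracting each dataset's own 1981-2010 mean from its anomalies. Below is a minimal sketch in Python with pandas; the file name and column layout are hypothetical placeholders, not the suppliers' actual formats.

```python
import pandas as pd

def rebase(series, start="1981-01", end="2010-12"):
    """Re-reference a monthly anomaly series to the 1981-2010 base period."""
    return series - series.loc[start:end].mean()

# Hypothetical file: one column of monthly anomalies per supplier, indexed by month,
# each column still on its supplier's native base period.
df = pd.read_csv("anomalies.csv", index_col=0, parse_dates=True)
rebased = df.apply(rebase)  # every column now shares the 1981-2010 reference
```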
Figure 6 – Comparison Starting in 1979
###########
Note also that Figure 6 lists the trend of the CMIP5 multi-model mean (historic forcings through 2005 and RCP8.5 forcings afterwards); the CMIP5 models are the climate models used by the IPCC for their 5th Assessment Report. The metric presented for the models is surface temperature, not lower troposphere temperature.
AVERAGE
Figure 7 presents the average of the GISS, HADCRUT and NCEI land plus sea surface temperature anomaly reconstructions and the average of the RSS and UAH lower troposphere temperature composites. Because the HADCRUT4 and NCEI products lag one month in this update, I’ve only updated this graph through August.
Figure 7 – Average of Global Land+Sea Surface Temperature Anomaly Products
MODEL-DATA COMPARISON – 30-YEAR RUNNING TRENDS
Yet another way to show how poorly climate models simulate surface temperatures is to compare 30-year running trends of global surface temperature data and the model-mean of the climate model simulations of it. See Figure 8. In this case, we’re using the global GISS Land-Ocean Temperature Index for the data. For the models, once again we’re using the model-mean of the climate models stored in the CMIP5 archive with historic forcings to 2005 and worst case RCP8.5 forcings since then.
Figure 8
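As an illustration of how a plot like Figure 8 can be built (a sketch of the general method, not necessarily the exact steps used here): each point is the least-squares slope of a trailing 360-month window, expressed in deg C per decade and plotted against the window's final month. The sketch below assumes `anoms` is a pandas Series of monthly anomalies with no gaps.

```python
import numpy as np
import pandas as pd

def running_trends(anoms, window=360):
    """Least-squares slope (deg C/decade) of each trailing 'window'-month span of monthly data."""
    months = np.arange(window)
    trends = {}
    for end in range(window, len(anoms) + 1):
        chunk = anoms.iloc[end - window:end]
        slope_per_month = np.polyfit(months, chunk.values, 1)[0]
        trends[chunk.index[-1]] = slope_per_month * 120.0  # deg C per month -> deg C per decade
    return pd.Series(trends)
```

Running the same function on the observational data and on the CMIP5 model-mean series gives the two curves being compared.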
There are numerous things to note in the trend comparison. First, there is a growing divergence between models and data starting in the early 2000s. The continued rise in the model trends indicates global surface warming is supposed to be accelerating, but the data indicate little to no acceleration since then. Second, the plateau in the data warming rates begins in the early 1990s, indicating that there has been very little acceleration of global warming for more than 2 decades. This suggests that there MAY BE a maximum rate at which surface temperatures can warm. Third, note that the observed 30-year trend ending in the mid-1940s is comparable to the recent 30-year trends. (That, of course, is a function of the new NOAA ERSST.v5 data used by GISS.) Fourth, that high 30-year warming rate ending about 1945 occurred without being caused by the forcings that drive the climate models. That is, the climate models indicate that global surface temperatures should have warmed at only about a third that rate if global surface temperatures were dictated by the forcings used to drive the models. In other words, if the models can’t explain the observed 30-year warming ending around 1945, then that warming must have occurred naturally. And that, in turn, generates the question: how much of the current warming occurred naturally? Fifth, the agreement between model and data trends for the 30-year periods ending in the 1960s to about 2000 suggests the models were tuned to that period, or at least to part of it. Sixth, going back further in time, the models can’t explain the cooling seen during the 30-year periods ending before the 1920s, which is why they fail to properly simulate the warming in the early 20th Century.
One last note: the monumental difference in modeled and observed warming rates at about 1945 confirms my earlier statement that the models can’t simulate the warming that occurred during the early warming period of the 20th Century.
MONTHLY SEA SURFACE TEMPERATURE UPDATE
I haven’t published a monthly sea surface temperature update since October 2016, with the most recent update found here. The satellite-enhanced sea surface temperature composite (Reynolds OI.2) used to be presented in global, hemispheric and ocean-basin bases. I may start updating them again regularly in the near future. Then again, I may not.
RECENT RECORD HIGHS
We discussed the recent record-high global sea surface temperatures for 2014 and 2015 and the reasons for them in General Discussions 2 and 3 of my most recent free ebook On Global Warming and the Illusion of Control (25MB). (And, of course, the record highs in 2016 are lagged responses to the 2014/15/16 El Niño.) The book was introduced in the post here (cross post at WattsUpWithThat is here).
STANDARD CLOSING REQUEST
Please purchase my recently published ebooks. As many of you know, this year I published 2 ebooks that are available through Amazon in Kindle format:
- Dad, Why Are You A Global Warming Denier? (For an overview, the blog post that introduced it is here.)
- Dad, Is Climate Getting Worse in the United States? (See the blog post here for an overview.)
Regards
Bob
“This suggests that there MAY BE a maximum rate at which surface temperatures can warm.”
Could it also relate to a constant pressure of the thumb on the scale, given that it coincides with the period of most intense adjustments to the data?
Any physical system will have characteristics like resistance, inductance, and capacitance. Mechanical systems have mass, friction, and spring constant. Thermal systems are similar. link
Depending on the system, some parameters can change instantly. The voltage across a resistor can change instantly. The voltage across a capacitor can not. As another example, the position of a mass can not change instantly because that would require infinite energy. We can’t discount the possibility that, somewhere in any system, a given parameter can change almost instantly.
The question is, given the available solar energy and the energy stored in the system and the characteristics of the system, how fast can the global temperature change?
Given the observed performance of the global climate system, there is almost certainly a maximum rate at which the global temperature can change. Even though local surface temperatures can change very fast, there is almost certainly a limit to how fast they can change. The record for day-night temperature change is 102°F. link That suggests that there is a theoretical limit which is not very much higher.
Saying that there MAY be a maximum rate at which surface temperatures can warm is just dumb. That limit almost certainly exists. The hypothesis that such a limit may not exist requires considerable proof.
AGW theory actually says that there is a defined rate at which the surface cools, set not only by the temperature but by the amount of CO2 in the intervening atmosphere. It seems to me that this should manifest itself in the temperature drop from daytime high to nighttime low on a daily basis. I see no evidence for that.
If you look at the average high and average low for a given month for a given location you get roughly the day/night temperature difference. Dry locations like Las Vegas will change by around 25°F. Humid locations like Hong Kong will change by around 8°F. That’s a 3:1 ratio.
Part of the reason for the difference is that humid air holds more heat. Part of the reason is that water vapor absorbs more heat. Anyway, it reinforces the concept that water is the most important greenhouse gas.
The alarmists will discount water vapor because it is a condensing gas. It’s true that water vapor is much more prevalent at lower altitudes. It’s also clear that water vapor has a disproportionate effect on surface temperature.
The only way Dr. James Hansen could get catastrophic warming was to invoke increased water vapor as a feedback. It’s the elephant in the room.
In Houston during the humid months, the air cools down until it reaches the dew point, usually near 77°F. Then it slowly cools for the rest of the night as moisture slowly condenses out of the atmosphere. Only when relatively dry air moves in do we see cooling continuing through the night. The overnight low can be predicted quite well by assuming the atmosphere will cool to the dew point and no lower.
Memories!
My introduction to engineering in my Freshman year (1960) was “Systems”.
We covered each combination of functions, in parallel and series, one at a time. We were then given relevant problems in mechanical, electrical, fluid and thermal scenarios for each combination, working out the answers mathematically (with slide rules). “Lab” time was used to recreate the problem on an analog computer; e.g. a spring and shock absorber system for an automobile.
(We were learning FORTRAN IV – digital modeling was out of the question.)
That was a great background for when we branched off into thermodynamics, fluid flow, electrical etc.
I’ve often wondered if “Systems” is still taught in the Engineering schools.
It was in 1965-1972, but that’s not much help since I’m evidently almost as old as you with my freshman year in 1965 E-a
School.
System analysis is still being taught, but mostly in EE and applied math classes. It’s not being taught in any substantive way in all the new “environmental” and “earth science” courses that have become au courant on present-day campuses. It’s the latter soft-science and geography departments, rather than the hard disciplines of physics and math, that breed most of the trumpeters of AGW dogma.
Commie, to be fair, I took it that Bob was suggesting that what we see here MAY BE the limit (0.15C/decade) and was just a little careless with the verbiage. The rates in 1945 and the 2000s appear to be bumping into this ceiling. This is a possibly profound thought that I’d hate to see lost in the semantics. I’m sure Bob didn’t think the climate could warm by 100 degrees per decade before his insight.
I “repaired” this more constrained insight because it could be a gamechanger. I have suspected something similar since the discovery that models used 2 decades ago proved to have been 300% too hot in their projections when overtaken by observations. Even the MWP looks like another bump into this ceiling.
Moreover, CAGW proponents overhauled their “projections” massively in response to the overly-hot finding. They pushed their starting point back to 1850 from 1950 so they could put ~1C in the bank, and they trimmed the iconic 2C danger limit back to 1.5C and hyped the end of the world that going more than 0.5C above the present would bring (no longer tossing around the ridiculous 3-8C above 1950). It is clear that the few actually smart practitioners on the “dark side” also had the idea that the data showed it was possible we could keep burning fossil fuels with abandon and might not even reach more than 1.5C.
Thank you commie Bob for triggering consolidation of my thoughts on this and thank you Bob Tisdale. Perhaps BobT you will kindly term this the Tisdale-commie Bob-Pearse Climate Limit if it begins to consolidate as we go forward!
Heh, we had a 52+ °F temperature drop in less than 24 hours here in SE Virginia in January 2014 (may not count as a diurnal drop as it did not occur during the same day).
“Any physical system will have characteristics like resistance, inductance, and capacitance.”
I think what you were meaning is impedance Bob, and perhaps an equivalent to Lenz’s law. Just in a different physical system.
Le Chatelier’s principle perhaps.
When you’re learning to analyze a system, it is conventional to use a second order differential equation to describe a mechanical system consisting of a mass, a spring, and a damper. The electrical equivalent described by the same equation is resistance, capacitance, and inductance. You’re usually analyzing the system’s response to a step impulse.
When you’re analyzing a circuit handling a signal, it’s conventional to use impedance.
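For readers following the analogy, the standard textbook forms of the two systems being compared are governed by the same second-order equation:

$$m\ddot{x} + c\dot{x} + kx = F(t) \quad \text{(mass, damper, spring)}$$
$$L\ddot{q} + R\dot{q} + \tfrac{1}{C}\,q = V(t) \quad \text{(series RLC circuit)}$$

Mass plays the role of inductance, damping the role of resistance, and the spring constant the role of inverse capacitance, which is why both systems show the same kind of limit on how fast their state variables can change in response to a step input.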
Any such rapid cooling events are accomplished by an air mass of one temperature moving aside for an air mass of another temperature, which is also non-instantaneous since air does have mass. But I contend that the possible rate of warming for any discrete air mass, if you could follow it and measure its temperature as it moves, would be much slower than that.
That isn’t a thumb; it is Mann with heavy boots, a coat, and all his pockets full of rocks.
Lower Troposphere Temperature: It would be interesting to see raw data from weather balloons.
Sure, the yellow graph in this chart is the raw dataset behind the RICH and RAOBCORE adjusted datasets.
https://drive.google.com/open?id=1GkOiRwT2qbdTdR6Z72LbgcjKQ6vK4F13
RICH increases the trend slightly (1970-2017) but RAOBCORE decreases the trend slightly. RATPAC is also included, fewer stations but better global distribution.
These are the only maintained datasets. HadAT was discontinued in 2012, IUK is only updated now and then, latest update through 2015.
What’s the % of the >70% ocean surface area of the globe that is covered by recording stations again? What is the length of the time record that they formulate these anomalies against?
If I shart my bed in the middle of the night and no-one smells or hears it….did it really happen ?
One thing I’ve noticed is that the 12-month running means of GISS versus NCEI can vary quite a bit and jump around a lot. E.g., the 12-month running means for the 12-month period ending in September 2015 show GISS 0.052 C lower than the NCEI data for the same period. But from April 2017 to September 2018, the 12-month running means have shown GISS between 0.05 and 0.06 higher than NCEI (12-month period ending April 2017, 12-month period ending May 2017, 12-month period ending June 2017, etc.). Why is the US government putting out 2 different numbers in the first place?
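For anyone wanting to check that sort of comparison, here is a minimal sketch (pandas; the file and column names are hypothetical). Note that because GISS and NCEI use different base periods, the absolute offset between them partly reflects the choice of reference years; the change in the difference over time is the interesting part.

```python
import pandas as pd

# Hypothetical file holding monthly anomaly columns "GISS" and "NCEI"
df = pd.read_csv("giss_vs_ncei.csv", index_col=0, parse_dates=True)

rolling = df[["GISS", "NCEI"]].rolling(12).mean()  # 12-month running means
diff = rolling["GISS"] - rolling["NCEI"]           # GISS minus NCEI, deg C

print(diff.loc["2015-09"])  # the 12-month running mean difference ending September 2015
```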
Bob have you looked at HAD Crut 4 data since Dr Jones’s BBC Q&A in 2010? Any comments?
I’ve checked the HAD Crut 4 temp trends before, but it seems to have changed a lot since Phil Jones’s Q&A with the BBC in 2010, after the Climategate scandal.
In 2010 he listed 4 warming trends since 1850 and there wasn’t much difference in the trends. The 4 trends were:
1860 to 1880: 0.163 C/decade
1910 to 1940: 0.150 C/decade
1975 to 1998: 0.166 C/decade
1975 to 2009: 0.161 C/decade
Today the York Uni tool has the SAME 4 trends for HadCRUT4 at:
1860 to 1880: 0.156 C/decade (lower)
1910 to 1940: 0.137 C/decade (lower)
1975 to 1998: 0.191 C/decade (much higher)
1975 to 2009: 0.193 C/decade (much higher)
So just 8 years after Jones’s BBC Q&A we see both earlier warming trends have been adjusted down and the two later trends have been adjusted up. And people wonder why we don’t trust these temp data-sets?
And this is the temp data-set that the IPCC uses to try and convince us to waste endless billions $ for zero gain. Who are they trying to fool? And why hasn’t one of their top scientists noticed this and blown the whistle?
Here’s Dr Jones’s 2010 BBC Q&A. See question A.
http://news.bbc.co.uk/2/hi/science/nature/8511670.stm
Here’s the YORK UNI tool using HadCRUT4 krig global.
http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
All of the warming has been due to natural factors.
. . . except possibly for some UHI effect and overzealous data reconstruction.
Lovely. 2019 we’ll see a turn to a cooling world, right?
Bob said “And that, in turn, generates the question: how much of the current warming occurred naturally?”
I’ll play some devil’s advocate: how much of the current warming is hidden by natural cooling? How would you know? Gavin S. said the natural component is maybe -10%. I don’t know, but I’d be careful here.
I have seen a paper that contends atmospheric CO2 is only ~6% from manmade sources, and the change in atmospheric CO2 levels is only ~20% due to Man’s activities. Furthermore, recent calculations support a low ECS, more like 0.4 deg C/2xCO2, but given actual atmospheric temperature changes relative to CO2 changes don’t even correspond, I believe it’s lower than that, much lower, and might even be negative. So in the end, I believe we could reasonably conclude Mankind’s contributions to a changing climate could be 2% or less.
The US PBS TV network has a special on eugenics. Eugenicists wanted to prove that “bad” humans were caused by genetics, and for a time this was the consensus theory. They did this by ignoring environmental factors, but when the Great Depression came people looked around and saw all walks of life in bread lines, and not because of genes.
Consensus climate science operates in much the same way. It deliberately suppresses natural variation and minimizes it, as was done with the erasure of the medieval warm period, in order to “prove” that CO2 is the determining factor in climate.
Consensus climate science is like eugenics in another way: both were/are the consensus theory of the time, widely embraced by the scientific community. If consensus equals legitimate science– as the likes of Oreskes would have us believe– then eugenics would have been a legitimate science.
The more I think about the problem with climate science the more I think it’s a problem not of science, but of ethics. Good scientists don’t bully the data to prove that their theory is the only one worth listening to. The eugenicists did that, too.
Don132 ( I sign so as not to be accused of using two names– an accident of how I signed up to comment in the new system.)
don’t forget you’re looking at adjusted temps to show the past cooler … and the present warmer
As a non-scientist, I find Bob Tisdale’s explanations illuminating and always written so that anyone can understand them. Thank you!
Figure 8 says it all.
Don132
Don says, “As a non-scientist, I find Bob Tisdale’s explanations illuminating and always written so that anyone can understand them.”
Thank you, Don132.
Cheers
Bob
Re: “Figure 8 says it all.”
The plotted blue line of actual temperatures looks ~OK, but the red line (climate model hindcasting) is highly questionable.
The way the models hindcast the strong global cooling from ~1945 to ~1975, even as fossil fuel combustion and atmospheric CO2 concentrations strongly increased, was to fabricate a body of false aerosol data THAT NEVER ACTUALLY EXISTED.
Do you want a model that draws an elephant and wiggles his trunk? That can be done too.
According to Figure 6, all 5 datasets seem to coincide pretty well, taking into consideration that there must be a certain amount of uncertainty in each set. So, in 40 years there has been about 0.6C of warming. At the same time atmospheric CO2 has increased by 75 ppm, meaning that an increase of 100 ppm leads to 0.8C of warming, assuming that all warming is due to CO2. The rate of the CO2 increase has been pretty steady, 2 ppm/year. So, after 50 years, in 2070, it should be 0.8C warmer than today. From the end of the 19th century the mean global temperature has increased perhaps 0.8-0.9C, leading to a 1.7C increase by 2070. The IPCC claims that the 0.5C of warming between 1.5 and 2.0C leads to catastrophic consequences. Do we see anything of the sort, given that there has already been 0.6C of warming in 40 years without catastrophes?
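Restating the back-of-envelope arithmetic above as a worked calculation (a straight linear extrapolation; as a reply below notes, the underlying CO2 relationship is logarithmic rather than linear):

$$\frac{0.6\ \mathrm{C}}{75\ \mathrm{ppm}} \approx 0.8\ \mathrm{C\ per\ 100\ ppm}, \qquad 2\ \mathrm{ppm/yr} \times 50\ \mathrm{yr} = 100\ \mathrm{ppm} \;\Rightarrow\; \approx 0.8\ \mathrm{C\ more\ by\ 2070}$$
$$0.8\text{ to }0.9\ \mathrm{C\ (since\ the\ late\ 1800s)} + 0.8\ \mathrm{C} \approx 1.7\ \mathrm{C\ above\ the\ 19th\text{-}century\ level}$$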
Land Surface Air Temperature Data Are Considerably Different Among BEST‐LAND, CRU‐TEM4v, NASA‐GISS, and NOAA‐NCEI
https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JD028355
It is not a linear relationship – it is logarithmic.
still comparing models and observations wrong,
jeez Bob
Pot calling the kettle black with your unicorns and wiggly line.
Mosher, Figure 8…

…presents exactly what I want presented. Nothing more, nothing less.
Good-bye
He wants you to krigg, harmonize and torture it into submission Bob, you are correct it is no better or worse than the unicorn chaser does.
it’s always 5 o’clock somewhere
Why is comparing models and observations wrong?
Climate science is the only science that specializes in predicting far into the future and is at the same time supposed to be irrefutable. Why should we believe future forecasts if past forecasts have been wrong?
Don132
still comparing models and observations wrong,
jeez Bob
should be using raw unadjusted temp data, right?
They differ. Therefore the models are wrong.
Are you going to try and tell us that the models and data trends match?
Figure # 8
Models (still) aren’t reality
Bob, good update. You can draw a straight trendline through the data, like in your fig. 6., but I see an uptick in the data in 1998. That is, the trendlines have about the same increase but the threshold rises in 1998. This is the year of a very strong El Nino, and I wonder how can an El Nino change the world-wide temperature? Something else going on, either real or artificial?
Have studies been done to show the “greenhouse effect” of CO2 at different concentrations? I have read (as a non-scientist) that there is a saturation point (if that is the correct term) at which CO2’s absorption rate drops off significantly, sharply reducing its “greenhouse” capabilities.
The continuing warming impact of CO2 increases arises, in theory, from the “wings” of the spectral lines. The central portion of the main absorption/emission band is indeed saturated; for example, downward radiation at the surface at these frequencies comes essentially from the near-surface atmosphere, at the level of a blackbody, and cannot increase any further. But it’s the wings where the action is.
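To put a number on that behavior: the commonly cited simplified expression for CO2 forcing (Myhre et al., 1998) is logarithmic in concentration, which is the net result of a saturated band centre plus still-growing line wings:

$$\Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W/m^2}$$

so each doubling of CO2 adds roughly \(5.35 \ln 2 \approx 3.7\ \mathrm{W/m^2}\) of forcing, regardless of the starting concentration.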
I know there is a lot of science effort that goes into all of this but it still comes across like stock market chart watching divorced of underlying, nonstationary factors. There MAY BE a maximum rate of warming because the combinations of warming factors cannot be sustained in unison for their respective cyclical overlays and nonstationary conditions.
When analyzing complex systems with multiple interacting variables it is useful to note the advice of Enrico Fermi, who reportedly said “never make something more accurate than absolutely necessary”. My recent paper presented a simple heuristic approach to climate science which plausibly proposed that a Millennial Turning Point (MTP) and peak in solar activity was reached in 1991, that this turning point correlates with a temperature turning point in 2003/4, and that a general cooling trend will now follow until approximately 2650.
Page, 2017, in “The coming cooling: usefully accurate climate forecasting for policy makers,” said: “This paper argued that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted.”
The establishment’s dangerous global warming meme, the associated IPCC series of reports, the entire UNFCCC circus, and the recent hysterical IPCC SR1.5 proposals are founded on two basic errors in scientific judgement. First – the sample size is too small. Most IPCC model studies retrofit from the present back for 100 – 150 years, when the currently most important climate-controlling, largest-amplitude solar activity cycles are millennial. This means that all climate model temperature outcomes are likely too hot and fall outside of the real future world. (See Kahneman, Thinking Fast and Slow, p. 118.) Second – the models make the fundamental scientific error of forecasting straight ahead beyond the Millennial Turning Point (MTP) and peak in solar activity which was reached in 1991. This turning point correlates with a temperature turning point in 2003/4. A general cooling trend will shortly follow until approximately 2650.
Because of the thermal inertia of the oceans there is a varying lag between the solar activity MTP and the varying climate metrics. The temperature peak is about 2003/4 – lag is about 12 years. The arctic sea ice volume minimum was in 2012 +/-, lag = 21 years. Possible sea level Millennial Turning Point – Oct 2015, lag = 24 years +/- (see https://climate.nasa.gov/vital-signs/sea-level/). Since Oct 2015 sea level has risen at a rate of only 8.3 cms/century. It will likely begin to fall within the next 4 or 5 years. For the detail, data, discussion, and forecasts, see Figs 3, 4, 5, 10, 11, and 12 in the abstract and links below.
The coming cooling: usefully accurate climate forecasting for policy makers.
See the Energy and Environment paper at http://journals.sagepub.com/doi/full/10.1177/0958305X16686488
and an earlier accessible blog version at
http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Here is the abstract for convenience:
“ABSTRACT
This paper argues that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities. Evidence is presented specifying the timing and amplitude of the natural 60+/- year and, more importantly, 1,000 year periodicities (observed emergent behaviors) that are so obvious in the temperature record. Data related to the solar climate driver is discussed and the solar cycle 22 low in the neutron count (high solar activity) in 1991 is identified as a solar activity millennial peak and correlated with the millennial peak – inversion point – in the RSS temperature trend in about 2003. The cyclic trends are projected forward and predict a probable general temperature decline in the coming decades and centuries. Estimates of the timing and amplitude of the coming cooling are made. If the real climate outcomes follow a trend which approaches the near term forecasts of this working hypothesis, the divergence between the IPCC forecasts and those projected by this paper will be so large by 2021 as to make the current, supposedly actionable, level of confidence in the IPCC forecasts untenable.”
See also the discussion with Professor William Happer at
http://climatesense-norpag.blogspot.com/2018/02/exchange-with-professor-happer-princeton.html
If it’s emergent behaviour, i.e. spontaneous nonlinear pattern, then it’s unlikely to have a fixed monotonic frequency.
Indeed! It’s difficult to find any discussion of the climate system that recognizes the keen difference between strictly periodic behavior (as with the astronomical tides) and the irregular oscillatory behavior of various quasi-periodic components and spectral bandwidths found in climate records. Finding any genuine understanding of nonlinear dynamic systems is even more difficult.
1sky1: The millennial and 60-year cycles are trivially obvious in the links above. Note that:
Because of the thermal inertia of the oceans there is a varying lag between the solar activity MTP and the varying climate metrics. The temperature peak is about 2003/4 – lag is about 12 years. The arctic sea ice volume minimum was in 2012 +/- lag = 21 years. Possible sea level Millennial Turning Point – Oct 2015 lag = 24 years +/- (see https://climate.nasa.gov/vital-signs/sea-level/ ) Since Oct 2015 sea level has risen at a rate of only 8.3 cms/century. It will likely begin to fall within the next 4 or 5 years. For the detail see data, and discussion, and especially Figs 3,4,5,8.10,11,and 12 in the links .
It depends what you mean by understanding. The core competency in the Geological Sciences is the ability to recognize and correlate the changing patterns of events in time and space. This requires a set of skills different from the reductionist and mathematical/statistical approach to nature, but which is essential for investigating past climates and forecasting future climate trends. It is necessary to build a record of the patterns and a narrative of general trends from an integrated overview of the actual individual local and regional time series of particular variables. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good empirical understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities which include the principal components of the observed emergent phenomena. The recognition of the principal quasiperiodic patterns provides sufficient “understanding” to make usable predictions – which is the object of the exercise.
Changing patterns of events in time and space comprise one of the core concerns of geophysics. They cannot be recognized in any rigorous way without the analytical skills of system and signal analysis. That’s how the mathematical physics of the climate system is rationally tied to the empirical data. The claim that geological sciences can perform this task without the “reductionist and mathematical/statistical approach” is tantamount to the admission of analytic ineptitude, which is manifest in the misguided notion that chaotic climate is predictable via periodic components. All of the rigorous methods of prediction (Kalman and/or Wiener filters) applied to the most reliable empirical data contradict that grossly optimistic claim.
El Niño is moving away.

So correct Ren. El Nino potential is rapidly falling apart of late.
El Nino is moving away.
Joe B says Modoki Nino coming.
The trades have just strengthened.
This in response to the renewed Peruvian upwelling.
So much for el Nino.
ren,
are you sure?
Bob, I have left this comment many times. I think any chart, like your figure 8, that shows a model’s result should show a heavy vertical line marking the year the model is run. This allows a reader to quickly identify how much of the model’s result is hindcasting and may have been tuned by known data.
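If a chart like Figure 8 were drawn with matplotlib, marking the model initialization year would be a one-line addition. A minimal sketch follows; it uses 2005, the year where the CMIP5 historical forcings end and RCP8.5 begins (as noted in the post), as the marker.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ... plot the 30-year running trends of the data and the model mean on ax here ...
ax.axvline(2005, color="black", linewidth=2)  # heavy line: historical forcings end, RCP8.5 begins
ax.annotate("hindcast | projection", xy=(2005, 0.05), rotation=90, va="bottom")
plt.show()
```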
“Third, note that the observed 30-year trend ending in the mid-1940s is comparable to the recent 30-year trends.”
“In other words, if the models can’t explain the observed 30-year warming ending around 1945, then the warming must have occurred naturally. ”
Yes, no doubt NV did play a large part in the early 20th cent warming (which the IPCC acknowledges).
In 1940 CO2 forcing was ~0.5 W/m^2.
Now it is ~2.0 W/m^2, and including other anthro forcings it is getting towards 3 W/m^2.
The warming experienced until the mid 40’s coincided with a warm PDO/ENSO and ended when it turned cold in the early 40’s.
http://2.bp.blogspot.com/-Fkg790Q3b8o/VMRGN17t2oI/AAAAAAAAHwo/GTCVnmku248/s1600/GISTempPDO.gif
There was also a notable warm AMO phase peaking around 1940…
Also solar output was increasing….
I have looked but can find no reference as to whether GCMs hindcast PDO/ENSO (link please if so), as in SSTs programmed in and the physics carried out by same on the atmosphere, or parametrized. I reckon it would take the complexity of an NWP weather model (computationally prohibitive). The PDO/ENSO state certainly cannot be programmed for a projection as it is unknowable.
Don132:
“Why should we believe future forecasts if past forecasts have been wrong?”
Because of the above … we don’t have the computational power to replicate the complexity of the climate system on decadal scales. Something akin to an NWP model would be required, which can only go a few days into the future before “chaos” takes over and makes them meaningless (chaos as in not knowing the initial state perfectly and the inherent instability in that starting state that that lack of precision lends to the outcome).
GCMs aren’t NWP weather models, nor can they be expected to be. On decadal scales natural variation will compound to give periods such as 1910-45 and 1998-2015 (prolonged -ve PDO/ENSO).
They are an ongoing means of learning dependent on computational power and new found knowledge of the physics of ocean heat movement/exchange. At present only ensemble products of multiple runs can be done with consequent averaging out of “wiggle” in warming.
Meanwhile the forcing contributed by GHGs continues to bring global temperature rises out of the “noise” of NV.
Anthony Banton said
“Meanwhile the forcing contributed by GHGs continues to bring global temperatures rises out of the “noise” of NV.”
In the rest of his post he admitted that models were useless for predicting anything(“we don’t have the computational power to replicate the complexity of the climate system on decadal scales”).
So how can he claim that the models can calculate the AGW out of the total temperature increase? If not the models, then on what other calculations is he basing his claim? All recent papers show an extremely small sensitivity to doubling of CO2. So how is Mr Banton able to give even an estimate of “forcing contributed by GHGs”?
Mr Banton’s claims are tantamount to saying that “I believe in God therefore he/she/it exists.” Or in his vernacular, I believe in man made global warming, therefore it exists.”
Alan Tomalty:
“In the rest of his post he admitted that models were useless for predicting anything(“we don’t have the computational power to replicate the complexity of the climate system on decadal scales”).
No, I didn’t say that my friend, because it’s not true.
I said that a GCM cannot forecast or project natural variation.
And cannot be expected to.
They do a different job.
In order for hindcasts to recreate internal variability – which is NOT direct forcing involving absorbed energy, either directly via SW (solar, albedo, aerosol) or via GHGs – we will need massively more computational power at our disposal than we have now.
Projections of AGW made with GCMs do not need to do that, as internal variability is not a forcing in the long term. It cannot be, as it is just the movement of energy already within the climate system and NOT a change in the balance between solar absorbed and LWIR emitted.
What is of far more import for any projection is the RCP that we follow.
We do not know what that will be.
“So how can he claim that the models can calculate the AGW out of the total temperature increase? If not the models? then on what other calculations is he basing his claim on? All recent papers show an extremely small sensitivity to doubling of CO2. So how is Mr Banton able to give even an estimate of “forcing contributed by GHGs”.
Oh, do read properly please!
I said they are a tool with which to learn and measure against. And that the mere act of averaging projection runs in an ensemble erases NV.
GCMs end up with a ball-park figure within confidence limits.
Ever notice those?
They are NOT meant to hit smack on the mean.
They cannot do so whilst we still have the large noise of NV overlying the monotonic and increasing GHG forcing.
GCM ensembles include the runs on the upper/lower side of the ensemble mean.
Which cannot be ruled out.
Hence the 95% limits … which were kept to during the warming slowdown.
It is just the Naysayer community that thinks the GCM’s are the entirety of the science.
No, it’s the physics that goes into them.
In 1896 Arrhenius calculated a warming of 5 to 6C for a doubling of CO2.
That was mankind’s first effort using the knowledge of 120 years ago.
The IPCC still puts it in the range 1.5 to 4.5C.
Anthony,
As per my Nature Geoscience link above, the exact age and duration of the PETM is uncertain, so the rate of carbon release can’t be known with any high degree of precision. But a PETM release rate comparable to today’s is more likely than today’s being ten times as fast.
The upper half, at least, of IPCC’s ECS estimate is not even science fiction, but fantasy. More likely the upper two thirds. ECS probably ranges from 1.0 to 2.0 degrees C per doubling of CO2, with a central value of 1.5 degrees still possibly too high.
On a homeostatic water planet, the lab value of 1.1 to 1.2 degrees C might well enjoy net negative feedbacks, rather than positive.
Anthony Banton,
First let’s note that the catastrophic warming predicted by CO2 forcing is wholly dependent on increased water vapor in the atmosphere, which has not occurred. This is telling us that the very modest warming allegedly caused so far by CO2 isn’t having the predicted effect.
2W/m2 is essentially noise in the system, as would be 3 W/m2. This point has been made by Lindzen: https://merionwest.com/2017/04/25/richard-lindzen-thoughts-on-the-public-discourse-over-climate-change/ This degree of energy variation happens all the time– plus or minus 2,3,4 W/m2 here and there through any number of natural processes. As is well-known, it’s all about feedbacks as much as it’s about forcings, yet the alarmist camp assumes that feedbacks must be positive when there’s no evidence that this is the case.
As you say, the models aren’t supposed to predict natural variability but only consider forcing:
“Projections of AGW made with GCMs do not need to do that as internal variability is not a forcing in the long term. It cannot be, as it is just the movement of energy already within the climate system and NOT and not the change in balance via solar absorbed = LWIR emitted.”
I don’t think the assumption that CO2 forcings are any different from natural forcings is valid. It assumes that there is some fine balance that an additional 2 W/m2 will upset and that THAT 2 W/m2 is somehow more significant than any other 2W/m2, and that the feedback to THAT 2W/m2 must be positive.
In any case, “global warming” is a nice theory. It makes sense. It would seem that adding a forcing of 2W/m2 might warm the surface, and in fact we can make all sorts of calculations to show how this might work. However, we are at present seeing no evidence that the theory is actually true, and that predictions made by the theory– such as increased water vapor, unusual heating high in the tropical troposphere, ice mass loss in Antarctica and Greenland, and the much-feared rise of sea level, which at present is a measly average of 3mm/year and isn’t unusual in a geologic perspective– are occurring. Please note that Greenland, Antarctica, and the Arctic freeze completely in their winters, as they have for some time now; to say that they’re “melting” is a little extreme and in any case one would expect ice to melt in summer, sometimes even dramatically, and even at the poles. We can look around and say that the feedbacks to those extra W/m2 are neutral or very slightly positive or maybe even negative, but not significantly positive, and the theory was a good theory but should be revised in accordance with observations.
But no. As has been happening ever since the inception of the modern version of this theory, too many scientists are hell-bent on defending the theory and are keen to dismiss, discard, alter, or destroy any evidence that contradicts what they know MUST be true.
I live in a very leftist part of the country, but I tell you people don’t give a crap anymore. Sure we have true believers, but most people, I think, are getting a little tired of the doomsday that’s always around the corner, and of this amazing science that predicts far into the future and that, despite this, we’re told is “irrefutable.”
Don132
Bob T, You may be onto something profound with your comment on a 0.15C limit to global warming rate. See my comment:
https://wattsupwiththat.com/2018/10/18/september-2018-global-surface-landocean-and-lower-troposphere-temperature-anomaly-update/#comment-2495849
0.15C/ decade warming rate limit, that is.
Thanks Bob.
https://www.theguardian.com/environment/datablog/2017/jan/19/carbon-countdown-clock-how-much-of-the-worlds-carbon-budget-have-we-spent
Everyday, I love looking at the Guardian countdown clock of world CO2 emissions. A steady 1000 tons per second. Go China Go. The world needs more CO2 in the atmosphere NOT less. The clock says mankind has now reached 75% of CO2 emissions (total carbon budget is 2.9 trillion tons to limit warming to 2C above 1850 level) before Armageddon. When Armageddon comes in 18 years and 78 days I will throw a big party. Notice that the IPCC says 12 years. We know that the alarmists won’t debate skeptics but the alarmists won’t even debate other alarmists as to their differences. Interesting.
Speaking of The Guardian Alan, was wondering if you or anyone else has read George Monbiot’s new opinion piece in the Guardian?
https://www.theguardian.com/commentisfree/2018/oct/18/governments-no-longer-trusted-climate-change-citizens-revolt.
He writes:
“…On 31 October, I will speak at the launch of Extinction Rebellion in Parliament Square. This is a movement devoted to disruptive, nonviolent disobedience in protest against ecological collapse…”.
He is sounding like an increasingly bitter and angry person. Seems to be losing his grip on calm, rational thought from my perspective. Over the top and off of a cliff.
Excellent stuff as usual Bob. Thanks!
Average global temperature is about what it was in 2002.
Atmospheric CO2 since 2002 has increased by 40% of the increase 1800 to 2017. CO2 (or any other ghg which does not condense in the atmosphere) apparently has little if any effect on temperature.
The average of the models does not really mean anything. The fact that there is a plethora of models is evidence that a lot of guess work has been done. Instead of averaging over all the wrong models they need to throw out the worst models and go with the model that seems to provide the best results. I think that it is really a matter of politics as to why the IPCC still makes use of a plethora of models most of which have to be wrong and even all of them may be wrong. The IPCC does not want to admit that CO2 does not have a significant effect on climate for fear of losing their funding.
Pretty clear to me: the best model are the one(s) with the most funding.
Has absolutely nothing to do with accurately tracking/predicting anything.
The CSIRO model seems to be better than the average. This is 10 runs for RCP8.5:
http://climexp.knmi.nl/data/icmip5_tas_Amon_CSIRO-Mk3-6-0_rcp85_0-360E_-90-90N_n_+++_1975:2020_a.png
The data run was for CMIP5 circa 2011. It appears to do better with modelling El Nino than other models. It captured the peak around 2016 and subsequent drop off. The average of models just rises steadily in prediction.
“It appears to do better with modelling El Nino than other models. It captured the peak around 2016 and subsequent drop off. The average of models just rises steadily in prediction.”
CSIRO does not do better with modelling El Nino. Most models handle it as well as could be done. It happens with about the right magnitude, form and frequency, but the timing is not locked to that on Earth. We can’t predict the timing of El Nino events on Earth, and it is the same in models. So getting the 2016 event right is just a coincidence. GCMs are used as climate models, not weather models. And since the models are not synchronised, the average cannot possibly reflect ENSO events.
CSIRO specifically aim to capture ENSO accurately:
Yes, they do. But you’ll notice that the statistics they give are of standard deviation and magnitude. Not of timing. They note an interesting feature in 2006 in their simulation. They don’t claim that it happened in reality (it didn’t).
If GCMs could predict the Nino of 2016, people would be using them to predict 2019. But they don’t.
Bob, please stop with this “pause-buster” whine.
All old SST datasets that ignore the shift in ship/buoy measurements ratio, are by mathematical necessity flawed, among them ERSST3, OISSTv2 one degree, and COBE SST.
ERSST4 and 5 have been tested and turned out to be better than HadSST3 post-WW2
http://advances.sciencemag.org/content/3/1/e1601207
https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/qj.3235
ERSST5 is an improvement from v4 since it includes and agrees with Argo. Another plus is that the temperatures now are adjusted down to buoy standard, giving more realistic representation of absolute SST.
It is spooky how accurately the predictions made in the 1980s have come to pass. Warming goes up and down with the weather, but the climate is gradually changing, as all the datasets clearly show.
The data does not falsify the claim that the earth is warming. However, if put into kelvin, the variance is < 1%. The question becomes: when will there be any cooling? Was "the pause" a cooling portion of the cycle overridden by the warming trend?
The answer is space based lenses that can cool or warm the earth as needed. This will become feasible in a few decades given current rates of growth of the economy / space industry.