CMIP6 Climate Models Producing 50% More Surface Warming than Observations since 1979

Reposted from Dr. Roy Spencer’s Blog

June 25th, 2020 by Roy W. Spencer, Ph.D.

Those who defend climate model predictions often produce plots of observed surface temperature compared to the models which show very good agreement. Setting aside the debate over the continuing adjustments to the surface temperature record, which produce ever-increasing warming trends, let’s look at how the most recent (CMIP6) models are doing compared to the latest version of the observations (however good those are).

First, I’d like to explain how some authors get such good agreement between the models and observations. Here are the two “techniques” they use that most annoy me.

  1. They look at long periods of time, say the last 100+ years. This improves the apparent agreement because most of that period was before there was substantial forcing of the climate system by increasing CO2.
  2. They plot anomalies about a common reference period, but do not show trend lines. Or, if they show trend lines, they do not start them at the same point at the beginning of the record. When you do this, the discrepancy between models and observations is split in half, with the discrepancy in the latter half of the record having the opposite sign of the discrepancy in the early part of the record. They say, “See? The observed temperatures in the last few decades nearly match the models!”

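The baseline trick in point 2 is easy to demonstrate numerically. Below is a toy sketch (synthetic linear series with made-up trend numbers, not the actual datasets) showing that when two series with different trends are expressed as anomalies about a common reference period, the model-minus-observation discrepancy gets split into a negative half early and a positive half late:

```python
import numpy as np

t = np.arange(1979, 2021)          # years, 1979..2020
obs   = 0.0175 * (t - 1979)        # toy observed trend, deg C
model = 0.0260 * (t - 1979)        # toy model trend, deg C (warms faster)

# Re-express both as anomalies about a common 1981-2010 reference period
ref = (t >= 1981) & (t <= 2010)
obs_anom   = obs   - obs[ref].mean()
model_anom = model - model[ref].mean()

# The model-minus-obs discrepancy is negative early and positive late:
# baselining has split it in half around the reference period
diff = model_anom - obs_anom
print(diff[0], diff[-1])
```

With the made-up trends above, the model sits about 0.14 °C below the observations in 1979 and about 0.21 °C above them in 2020, even though it warms faster throughout the whole record.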
In the following plot (which will be included in a report I am doing for the Global Warming Policy Foundation) I avoid both of those problems. During the period of strongest greenhouse gas forcing (since 1979), the latest CMIP6 models reveal 50% more net surface warming from 1979 up to April 2020 (+1.08 deg. C) than do the observations (+0.72 deg. C).

Note I have accounted for the trends being somewhat nonlinear, using a 2nd order polynomial fit to all three time series. Next, I have adjusted the CMIP time series vertically so that their polynomial fit lines are coaligned with the observations in 1979. I believe this is the most honest and meaningful way to intercompare the warming trends in different datasets.
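For what it’s worth, the fitting-and-alignment procedure described above can be sketched in a few lines of Python. This is a toy reconstruction with synthetic series (made-up trends and noise, not HadCRUT4 or the CMIP6 archive), intended only to show the mechanics of the 2nd-order fit and the vertical co-alignment at 1979:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monthly time axis, Jan 1979 .. Apr 2020 (496 months), in years since 1979
x = np.arange(496) / 12.0

# Stand-in series: a synthetic "observations" record and a synthetic
# "model mean" (illustrative only; the post uses HadCRUT4 and 13 CMIP6 runs)
obs   = 0.0175 * x + 0.1 * rng.standard_normal(x.size)
model = 0.0260 * x + 0.1 * rng.standard_normal(x.size) + 0.3

def poly2(y):
    """2nd-order polynomial fit against x, returned as a callable."""
    return np.poly1d(np.polyfit(x, y, 2))

fit_obs, fit_model = poly2(obs), poly2(model)

# Shift the model series vertically so its fit line meets the
# observations' fit line at the start of the record (1979, i.e. x = 0)
model_aligned = model + (fit_obs(0.0) - fit_model(0.0))

# Net warming over the record = fit value at the end minus fit value at 1979
warm_obs   = fit_obs(x[-1]) - fit_obs(0.0)
warm_model = poly2(model_aligned)(x[-1]) - poly2(model_aligned)(0.0)
print(f"net warming, obs: {warm_obs:+.2f} C  model: {warm_model:+.2f} C")
```

After alignment the two fit lines agree exactly in 1979, so the comparison of net warming is not contaminated by an arbitrary vertical offset between the datasets.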

As others have noted, it appears the CMIP6 models are producing even more warming than the CMIP5 models did… although the KNMI Climate Explorer website (from which all of the data was downloaded) has only 13 models archived so far.

Rod Smith
June 27, 2020 2:23 pm

If they can’t even correctly backdate their models, how are we supposed to trust their predictive power? Blind faith?

n.n
Reply to  Rod Smith
June 27, 2020 2:46 pm

Have faith in mortal gods, goddesses, their religious virtues, and secular models infilled with brown matter.

Richard (the cynical one)
Reply to  n.n
June 27, 2020 3:07 pm

If the models are better at generating the desired answers than reality is, then by all means, let’s go with the models.

sycomputing
Reply to  Rod Smith
June 27, 2020 3:04 pm

If they can’t even correctly backdate their models, how are we supposed to trust their predictive power?

It doesn’t seem the IPCC expects you to do so (emphasis added):

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.”

https://www.ipcc.ch/site/assets/uploads/2018/03/TAR-14.pdf

Section 14.2.2.2, p. 774

Jim Gorman
Reply to  sycomputing
June 27, 2020 4:41 pm

“This reduces climate change to the discernment of significant differences in the statistics of such ensembles.”

This is so much bull pucky. Significant differences in models that are wrong prove nothing!

sycomputing
Reply to  Jim Gorman
June 27, 2020 6:51 pm

This is so much bull pucky. Significant differences in models that are wrong prove nothing!

Yeah kinda backs up this: “In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

What say you?

jorgekafkazar
Reply to  sycomputing
June 27, 2020 8:40 pm

“The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.”

This is mistaking the map for the territory. About as useful as wiring farts to cardboard.

sycomputing
Reply to  jorgekafkazar
June 27, 2020 9:06 pm

This is mistaking the map for the territory. About as useful as wiring farts to cardboard.

Brilliant observation jorgekafkazar.

So you mean it reinforces this:

“In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

What say you?

Adam
Reply to  Rod Smith
June 27, 2020 4:47 pm

More like CHIMP6 models.

Dave Fair
June 27, 2020 2:29 pm

Does anybody know what the CMIP6 modelers think happened around their 2005 step decline? A little wagging of the elephant’s trunk?

n.n
Reply to  Dave Fair
June 27, 2020 2:49 pm

More like pin the tail on the donkey. Spin, spin, spin, and go.

Michael Jankowski
June 27, 2020 2:32 pm

“…Those who defend climate model predictions often produce plots of observed surface temperature compared to the models which show very good agreement…”

And usually only the global anomaly. Being wrong region after region but having an overall composite in “very good agreement” with the global anomaly is garbage. And they claim “very good agreement” while things like clouds, precipitation, etc., are not in “very good agreement.”

Robert of Ottawa
June 27, 2020 2:36 pm

And even the HadCRUT data are bogus, tweaked and adjusted.

Jeroen
June 27, 2020 2:40 pm

This is with the Urban Heat Island effect included. They are probably producing 100% more warming.

mikewaite
June 27, 2020 2:44 pm

Neither CMIP5 nor 6 allows for the El Niños that have dominated as abrupt deviations from 1999 to 2019. Why so? They model the volcanic aerosol effect quite accurately. If you eliminated the El Niños from the observational record, the rate of warming since 2002 would be about 0.1 C per 18 years. But, you cry, the more frequent El Niños are proof of a climate crisis via AGW. If so, why do they not appear in the CMIP models?
Now I am just a mere chemistry lab rat who knows nothing of R or Python or Principal Component Analysis, etc., but even I can see that the proposed trend line through the observational data obscures the indication that there are three different regimes from 1979 to the present, and that only since 2014 could the rate of increase of global temperature match that of the trend line for CMIP6. Yet there has been no pronounced sudden change in the rate of CO2 increase in the period 2014-2020, and ironically that is the period when the US and EU started to reduce (slightly) their CO2 emissions.

Mike McHenry
Reply to  mikewaite
June 27, 2020 3:06 pm

Fellow chemist. Let’s not forget thermodynamics. Water at the same temperature as air, in equal volumes, holds 3000+ times more heat than air. So air can’t change sea surface temperatures, and so it can’t affect El Niño or anything else associated with the oceans, like hurricanes etc.
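The 3000+ figure is a straightforward volumetric heat capacity comparison, sketched below with textbook values (illustrative numbers, not from the comment):

```python
# Volumetric heat capacity of liquid water vs near-surface air
water_cp, water_rho = 4186.0, 1000.0   # J/(kg K), kg/m^3
air_cp,   air_rho   = 1005.0, 1.2      # J/(kg K), kg/m^3

# Heat stored per cubic metre per kelvin, water relative to air
ratio = (water_cp * water_rho) / (air_cp * air_rho)
print(round(ratio))   # ~3470, consistent with the "3000+" figure
```

So a cubic metre of water holds roughly 3,500 times as much heat per degree as a cubic metre of near-surface air.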

Nick Stokes
Reply to  mikewaite
June 27, 2020 4:22 pm

“Neither CMIP5 nor 6 allow for the El Ninos that have dominated as abrupt deviations from 1999 to 2019. Why so?”
Models allow for El Niños. But they don’t follow the same phase sequence. Weather happens, but is not synchronised. So when you average a number of models, the El Niño effect disappears in the average.

You’re right that the deviation in that plot is basically the “pause” from 1999 to 2014. And that was caused by two successive La Niñas. Such events are possible in models, but not with the same timing.
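The point about unsynchronised ENSO washing out of an ensemble average can be illustrated with a toy calculation (synthetic runs with a made-up trend and a 4-year sinusoidal “ENSO”, nothing to do with the real CMIP archive): each individual run keeps its full oscillation, but the 13-run mean largely cancels it because the phases are random.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(500) / 12.0           # ~42 years, monthly, in years

# 13 toy "model runs": identical trend and ENSO-like oscillation amplitude,
# but each run's oscillation carries a random phase (unsynchronised weather)
runs = np.array([
    0.02 * t + 0.25 * np.sin(2 * np.pi * t / 4.0 + rng.uniform(0, 2 * np.pi))
    for _ in range(13)
])

ensemble_mean = runs.mean(axis=0)

# Detrended standard deviation: large in any single run, small in the mean
single_wiggle   = np.std(runs[0] - 0.02 * t)
ensemble_wiggle = np.std(ensemble_mean - 0.02 * t)
print(single_wiggle, ensemble_wiggle)   # the oscillation largely cancels
```

Averaging with random phases shrinks the oscillation roughly like the magnitude of a random walk of 13 unit phasors divided by 13, so the ensemble mean shows mostly the shared trend.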

Dave Fair
Reply to  Nick Stokes
June 27, 2020 4:45 pm

Such La Ninas offset by two El Ninos.

Burl Henry
Reply to  Dave Fair
June 28, 2020 7:33 am

Dave Fair:

“Such La Ninas offset by two El Ninos.”

This is usually correct. However, the very strong 1997-98 and 2015-2016 El Niños were offset by La Niñas:

1997-98: Offset by 1998-99 La Nina (caused by VEI4 eruption of Soufriere Hills volcano May 26, 1997)

2015-16: Offset by 2016-18 La Ninas (caused by the VEI4 eruptions of Chikurachki Feb 16, 2015, Calbuco Apr 22, 2015, Wolf May 15, 2015, Kuchinoerabujima May 28, 2015, and Kliuchevoski Aug. 28, 2015)

For VEI4 eruptions, it generally takes about a year for their maximum cooling effect to occur.

Jim Gorman
Reply to  Nick Stokes
June 27, 2020 4:51 pm

“the El Nino effect disappears in the average.”

What else disappears in the average?

Dave Fair
Reply to  Jim Gorman
June 27, 2020 5:00 pm

The hindcasts do not even reflect known ENSO forcings!

Again, what caused the CMIP6 models’ 2005 downward blips? Could it be related to CMIP5 models’ beginning their forecasts in 2005 (switching from hindcasts)? Some sort of legacy problem?

It’s elephant-trunk wagging all the way down.

Nick Stokes
Reply to  Dave Fair
June 27, 2020 5:04 pm

“The hindcasts do not even reflect known ENSO forcings!”
What are those “ENSO forcings”?

Dave Fair
Reply to  Nick Stokes
June 27, 2020 5:15 pm

I’m not going to play UN IPCC bureaucratic semantic games, Nick. Known ENSO phenomena affect global climatic metrics. Since those phenomena are well documented, why are their impacts not reflected in UN IPCC model outputs of global temperature profiles?

Nick Stokes
Reply to  Dave Fair
June 27, 2020 5:55 pm

“Since those known phenomenon are well documented”
A model is what it says. It is something that you can subject to forcings, and it will generate outcomes. You can’t usefully subject it to outcomes. If you can’t describe the forcings, you can’t get the model to respond.

Michael Jankowski
Reply to  Dave Fair
June 27, 2020 7:00 pm

“…If you can’t describe the forcings, you can’t get the model to respond…”

So a vital form of natural variation is missing. Sounds wonderful. We know how to do DEs and CFD, but…GIGO.

Nick Stokes
Reply to  Dave Fair
June 27, 2020 8:18 pm

“So a vital form of natural variation is missing.”
No. Models do very realistic ENSO, and at about the right frequency. It is part of the mechanics. They just aren’t synchronised.

GCMs predict climate, not weather.

michael hart
Reply to  Dave Fair
June 27, 2020 10:42 pm

Nick, can you point me to some literature describing when and how models first began to spontaneously produce el nino/la nina events without being forced to do so by the programmer?

Nick Stokes
Reply to  Dave Fair
June 28, 2020 12:10 am

“when and how models first began to spontaneously produce el nino/la nina events”
ENSO simply follows from getting the ocean/atmosphere physics right. Here is a 2004 paper assessing how well the models of the day did it. The verdict is mixed, but by now they do it very well.

michael hart
Reply to  Dave Fair
July 2, 2020 2:27 pm

Thanks. Looks like it’s down to parameterizations again. I guess I didn’t really expect anything else.

Clyde Spencer
Reply to  Jim Gorman
June 27, 2020 8:40 pm

Jim
You asked “What else disappears in the average?” A reasonable estimate of the variance?

Jim Gorman
Reply to  Clyde Spencer
June 28, 2020 6:10 am

And uncertainty. The models do not inherently assess uncertainty in measurements or distribution variance. Therefore you cannot get an envelope that portrays the range of possible values.

Doing so after the fact is a perfect recipe for confirmation bias and circular logic.

Pat Cusack
Reply to  Jim Gorman
June 28, 2020 2:18 am

Q. “What else disappears in the average (of temperatures)?”
A. The nature of the thing measured (temperature)! The average of two temperatures is NOT a “temperature”; it’s a statistic.

Jim Gorman
Reply to  Pat Cusack
June 28, 2020 6:20 am

Exactly. It starts with averaging Tmax and Tmin. Will subsequent averaging tell which one is changing, or perhaps both?

Averaging different distributions with different variances to find a Global Average Temperature is something a numbers person does. It is not a temperature; it is a meaningless mean.
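The Tmax/Tmin point is easy to see with two hypothetical station-days: very different diurnal behaviour can produce the identical (Tmax + Tmin)/2 statistic, so the daily mean alone cannot tell you which extreme moved.

```python
# Two hypothetical station-days with very different diurnal behaviour
day_a = {"tmax": 30.0, "tmin": 10.0}   # large diurnal range
day_b = {"tmax": 22.0, "tmin": 18.0}   # small diurnal range

mean_a = (day_a["tmax"] + day_a["tmin"]) / 2
mean_b = (day_b["tmax"] + day_b["tmin"]) / 2

print(mean_a, mean_b)   # both 20.0: the average hides which extreme moved
```

A warming of the nights (rising Tmin) and a warming of the days (rising Tmax) are physically different stories, yet they can be indistinguishable in the averaged record.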

MarkW
Reply to  Nick Stokes
June 27, 2020 6:35 pm

“And that was caused by two successive Nina’s.”

There is no truth in your claim. The La Niña in 1999 was preceded by a massive El Niño, and the pause began before either. The La Niña near the end of the pause doesn’t explain the lack of warming in the 15 years between the two La Niñas.

Nick Stokes
Reply to  MarkW
June 28, 2020 12:12 am

The big La Niñas were in 2008 and 2011/12. The El Niño in between (2010) was very weak.

Greg Sullivan
Reply to  mikewaite
June 27, 2020 11:08 pm

What’s the latest on the “troposphere hot spot”? Has it happened yet?

Leitwolf
June 27, 2020 2:58 pm

Well, since the whole GHE narrative is a hoax, who cares about such models?

https://www.docdroid.net/phJh2cU/the-strange-nasa-map1-doc

Steven Mosher
June 27, 2020 3:42 pm

1. You used the lowest observations, HadCRUT. BAD ROY. The observations, like the models, have STRUCTURAL uncertainty. The way you account for that is by showing the various observational series.
2. You didn’t mask for differences in coverage.
3. You didn’t compare apples to apples (you didn’t use SST from the models).
4. No trend uncertainty is shown.
5. Same alignment error as your other work on model/observation comparisons.

D-

Ktm
Reply to  Steven Mosher
June 27, 2020 4:17 pm

You make a fair point about cherry picking the Hadcrut dataset.

Roy should have compared against the very best most pristine air temperature dataset that exists, USCRN.

https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&parameter=anom-tavg&time_scale=p12&begyear=2005&endyear=2020&month=5

How good a job do you suppose CMIP6 models will do matching that, Mosher?

Nick Stokes
Reply to  Ktm
June 27, 2020 4:24 pm

USCRN measures a different location.

Ktm
Reply to  Nick Stokes
June 27, 2020 4:41 pm

Like Mosher says, just mask the models for differences in coverage.

I’m sure between you and Mosher and colleagues you each might know, someone could make it happen.

We want to use the best data available, right?

Chris Hanley
Reply to  Steven Mosher
June 27, 2020 4:38 pm

Here are four data set linear trends since 1979:
https://woodfortrees.org/plot/gistemp/from:1979/trend/offset:-0.12/plot/hadcrut4gl/from:1979/trend/plot/rss/trend/offset:0.23/plot/uah6/trend/offset:0.23
It looks like Dr Spencer has selected a middle plot.

Greg Locock
June 27, 2020 4:56 pm

When you plot these curves I think it is a great idea to separate the training data from the curves produced forward from that date.

I vaguely remember that the CMIP5 models were trained on data up until 1990, so not surprisingly they agreed quite well until then, and then lost the plot.

Dave Fair
Reply to  Greg Locock
June 27, 2020 5:05 pm

Related, Greg: On what date did the CMIP6 models switch from hindcasts to forecasts?

Waza
Reply to  Greg Locock
June 27, 2020 5:44 pm

Greg and David.
I have same question.
I actually thought the key date was 2000/2001 and the projections came out in 2006 (grace period).
So yes, I would like to know the trained-up-to date for CMIP5 & 6, and what the deadline for submission or grace-period date was.

Ron Clutz
Reply to  Greg Locock
June 28, 2020 4:51 am

CMIP5 hindcasts ended with 2005, forecasting 2006 to 2100. CMIP6 hindcasts end with 2014, forecasting 2015 to 2100.

https://rclutz.wordpress.com/2020/01/26/climate-models-good-bad-and-ugly/

Ktm
Reply to  Ron Clutz
June 28, 2020 9:15 am

In that case, the USCRN dataset is the ideal challenge for the CMIP5 models.

They start forecasting in 2005, USCRN starts in 2005.

For CMIP6 I suppose they could use Roy’s approach to graph the best fit trend lines against USCRN and sync them at the 2005 starting point. Then we pay extra attention to any divergence from 2015 on.

waza
Reply to  Ron Clutz
June 28, 2020 6:09 pm

thanks Ron

Robert of Texas
June 27, 2020 5:20 pm

You make the argument “Ignoring the quality of the data, I show that the models still predict too high a temperature”. I argue one cannot make any kind of scientific analysis UNLESS the data quality is first addressed.

Low quality data that is repeatedly tampered with cannot form a basis for scientific discovery. It has to be qualified, adjusted if possible through the usual “here are the reasons, here are the methods, and here is the data before and after” scientific review process, and its faults (error margins, precision, accuracy) accounted for. If this is not possible, then the data is discarded for the purpose(s) it is unfit for.

There is a small amount of high quality data available – The U.S. Climate Reference Network (USCRN). One could also use rural stations that have not been contaminated by urban development. One MUST throw out all stations in urban areas as the heat pollution in those areas cannot be adequately historically quantified.

Using only the higher quality data, one finds that the computer projections are off far more than 50%. You are picking apart bad plots that use tricks to hide certain features, but it’s all based on bad data – so the entire plot needs to be discarded, not corrected.

Loren C. Wilson
Reply to  Robert of Texas
June 28, 2020 7:12 am

And please show the 1-sigma confidence interval for each model and for the measured data. And then explain why you can average the output of several models that use different physics and tuning to produce a meaningful result. This is malfeasance of the highest order.

June 27, 2020 5:46 pm

For 1979 to present… just use UAH. About +0.57°C. https://www.nsstc.uah.edu/climate/

For 2015 to present… just use Temp.global. About -0.01°C. http://temperature.global/

Models up near +1°C are hockey puck.

Rud Istvan
June 27, 2020 6:07 pm

SM and NS:
You both miss the BIG CMIP6 deal. Roy focusses on the ‘present’, because that is what his science does.

Now you should explain here factually the CMIP6 goofs in TCR and ECS, both more relevant (climate>30 year weather envelope). Low end climate models still ‘Unfortunately’ track history (INM-CM5). The upper half are yet more off the observational energy budget model rails. Way more off. Not off by 2x like CMIP5, now by 3x.

Waza
June 27, 2020 6:12 pm

Why are there squiggles in 2025 in the model average output?
And why is the up squiggle higher in 2025?

Michael Jankowski
June 27, 2020 7:05 pm

A post here by Dr. Spencer always equates to Viagra for Mosh.

Antero Ollila
June 27, 2020 7:40 pm

According to my observations, the official climate community has paid almost no attention to a special occurrence causing a temperature increase of 0.2 °C during 2018-19. The same cause was involved in the temperature effects of the super El Niño of 2015-2016, increasing the surface temperature by about 50%. This occurrence has been a shortwave (SW) radiation anomaly. While solar insolation has been slightly decreasing since 2000, SW radiation has been in an upward mode since 2014.

Strangely enough, Norman Loeb, the CERES Science Team Leader, published an article in 2018 with his colleagues in which they confirmed this anomaly. They concluded that the reason for the SW radiation anomaly was a change in low-level cloudiness. They stopped there, because going further might come too close to Svensmark’s theory that cosmic rays may cause cloudiness changes.

If you subtract this 0.2 °C from the surface temperature trend in 2015-19, the pause of the 2000s continues.

Here is my press release about this finding:
https://www.climatexam.com/copy-of-press-release-jan-2020-engl

Izaak Walton
Reply to  Antero Ollila
June 27, 2020 9:32 pm

Antero,
Of course, if you subtract the temperature increase from the trend, it is not surprising that there is a pause. And I suspect that if you subtracted more than 0.2 C, you might well find that temperatures have been falling.

Antero Ollila
Reply to  Izaak Walton
June 28, 2020 12:53 am

I did not make this up out of thin air; I assumed that anybody reading these blogs is aware of the basic facts of climate change science. The IPCC has not regarded SW radiation as part of climate change, not so far. Nor has Dr. Spencer recognized this SW anomaly as the cause of a 0.2 C increase since 2014.

The point is not that you just subtract something to get a result. There must be a scientific basis, and in this case there is. The final outcome is that, regardless of the increase in GH gas concentrations, they have not been able to increase the temperature. Have you any comments on this?

TheFinalNail
June 27, 2020 8:38 pm

“They plot anomalies about a common reference period, but do not show trend lines.”
_______________________

Can this quote really come from the same Roy Spencer who, month after month for the past decade plus, has updated a chart of the UAH_TLT anomaly series without ever illustrating its trendline? Yet here he is saying how much this ‘annoys’ him in the work of others.

Jeff Alberts
June 27, 2020 8:52 pm

“They look at long periods of time, say the last 100+ years. This improves the apparent agreement because most of that period was before there was substantial forcing of the climate system by increasing CO2.”

What evidence is there that there as been any “substantial forcing of the climate system by increasing CO2”?

Waza
June 27, 2020 9:19 pm

Dave has asked a fair question about the 2005 drop.
Any answers would be appreciated.

Redge
June 27, 2020 11:13 pm

It would seem the oft-heard cry of “IT’S WORSE THAN WE THOUGHT” is finally true.

stinkerp
June 28, 2020 12:54 am

Why did you use HadCRUT4, which is biased by poorly located stations and urban heat island effects? I would have thought your own UAH would give a more accurate reading of global temperatures.

Vincent Causey
June 28, 2020 1:51 am

The better they get, the worse they get.

Matt_S
June 28, 2020 2:08 am

“More Surface Warming than Observations since 1979”, well, more than adjusted surface warming.

Pat Smith
June 28, 2020 3:53 am

On 24 June, Nature published an article entitled ‘Five ways to ensure that models serve society: a manifesto’. It deals with the various models of the pandemic and suggests ways of making the process more transparent and providing greater confidence. Its sub-heading says much: ‘Pandemic politics highlight how predictions need to be transparent and humble to invite insight, not blame.’ Most of the points apply equally to climate change modelling.

https://www.nature.com/articles/d41586-020-01812-9?utm_source=Nature+Briefing&utm_campaign=6441803628-briefing-dy-20200626&utm_medium=email&utm_term=0_c9dfd39373-6441803628-44899189

Paul in uk
June 28, 2020 3:54 am

There is too much I don’t understand or know about all this: I suspect I’m thinking about this the wrong way, my thermodynamics is not good enough, and my knowledge of modelling is inadequate. Some questions, some of which others may have already touched on:

Presumably delta temperature alone is an inadequate measure of the thermodynamic system, as energy could be going into other forms, not just temperature. Similarly, is a global average any use when two instances of the same global average value (such as temperature) could have a very different spread of that value around the globe, and hence be very different thermodynamically? I.e., isn’t this a bit like looking at a multidimensional problem in only a few of those dimensions, and therefore meaningless?

If we say, e.g. that over a certain period the heat is going into the oceans how do we include that in the models? Do we know the mechanism by which it is going into the oceans so the models represent that impact on surface temp throughout the comparison period?

Presumably, what is included in the models, and how, continually changes over the years as we improve them. Presumably the further back in time we go, the less we have of the data measurements or accuracy we have today, since we keep adding to what we measure as we realise its importance, and keep adding or improving ways to measure it; this applies not just to temperature but to many factors.

Presumably more recent models should be better but to compare apples with apples the period over which we can do a justifiable comparison reduces. Is that why we are starting at 1979? When I’m seeing models vs measured going back a hundred years or so has that been adequately allowed for?

Glad I’m not a modeller, my brain would explode: isn’t the way to do it to take all the relevant data over the whole globe at an instant in time (e.g. perhaps we could do that from the weather forecast model) and a delta time interval before and after and understand the thermodynamic status and trends. But then my mind struggles with how we cope with jet streams, stratosphere etc which I presume have impacts on the weather (thermodynamics). Meanwhile have another type of model built up entirely from first principles, no fudge factors or basic equations for forcings calculated from observations etc. Is that how it’s done, I don’t know but assumed it was like a weather forecast but with fudge factors and equations for forcings etc worked out from observations.

The other thing I don’t understand is this: if we’re so confident, why do we live through significant divergences between modelled and observed values? Not only do we fail to forecast them, but despite having all the data come in there and then, we can only explain them several years later. Why not at the time? Doesn’t that suggest we are always on the back foot, not actually understanding adequately and effectively driving the output around the bends rather than letting it drive itself?

Jim Gorman
Reply to  Paul in uk
June 28, 2020 6:50 am

You are asking if temperature is a good proxy for energy in the atmosphere. It’s a good question. One has to wonder why there is little if any research into making an energy model that includes all the various forms of heat. Temperatures appear to be easier numbers to deal with and play with. As such they are worth less too.

Paul in uk
Reply to  Jim Gorman
June 28, 2020 3:23 pm

Many thanks, Jim. In that case I don’t understand why we are not doing so (30 years ago that is what I assumed a climate model was), and for current (GCM) models I can’t yet understand what it really means when we’re only looking at an average temperature graph. But this is not my area.

Wolf at the door
June 28, 2020 5:31 am

In one way it’s quite simple. If you think that a large temperature rise has caused:
Sea level rise accelerating
More frequent and more severe “weather disasters”
Less food production
Dangerous melting of Arctic and Antarctic regions
over the past 40 or so years, compared to other (quite recent) periods of the earth’s history, then you don’t have a large temperature rise.

Bill Rocks
Reply to  Wolf at the door
June 28, 2020 3:22 pm

Watd,

Roger that. Over and out.

June 28, 2020 4:27 pm

Since 2015, guess what the global T anomaly is, based on a 14.0°C baseline zero: exactly 0.0°C, based on over 60,000 continuous worldwide observation sites. Use the down arrow to see the graph and data sources.

http://temperature.global/

Paul in uk
June 29, 2020 11:21 am

I’m finding that the problem with this site is that, in a way, it is too good; with so many new posts each day, most discussions stop within a couple of days.

Discussions like this one I think are most important and productive as it seems some people who can explain the view from within climate science are joining in, but either because of the above, or other reasons frustratingly there seem to still be questions raised in these sorts of discussions they haven’t answered or issues not adequately discussed.

Perhaps it could be more productive if:
A) Such important discussions weren’t so quickly lost in the forest of other posts so participants don’t disappear before clarifying or answering questions.
B) Some sort of summary, periodically updated, that summarises important points raised and the answers or conclusions, indicating whether an issue can be closed, and listing important questions not adequately answered or discussed.

I know this may sound like a lot of work, but hopefully it could mean such discussions are much more productive, forcing answers and acting as a reference. My initial thought is a page showing, say the latest 6 important discussions, an archive as they drop off into various categories, but keep them open so the discussion can continue and if enough good new information or questions raised show them on another page showing the top 3 or 4 that need to be looked at again so hopefully people with suitable knowledge will add relevant comments, answers, etc.

Unfortunately too much of this is over my head or comments too short for me to adequately understand otherwise I’d attempt the kind of summary I mean. Similarly I’m struggling with summarising or deciding what I think are the important questions that need to be addressed by someone, and I may have missed the answers.

QUESTIONS NEEDING ANSWERS or more discussion?
1) The validity of using average temperature.
2) The validity of using temperature as proxy for energy.
3) The 2005 step.
Probably more than that.

Alasdair Fairbairn
June 30, 2020 6:16 am

The IPCC was originally set up to assess the risks of anthropogenic CO2 emissions. Had the IPCC concluded that there was insignificant risk, it would have been closed down. It is not surprising, therefore, that high risks were found and that subsequent findings provide increasing support for this conclusion. Neither is it surprising that any challenge to this position would be rigorously suppressed and ignored.

Conspiracy theory or not, there is little doubt that there is a strong conflict of interest here.

John Bruyn
July 1, 2020 5:09 am

Thanks for pointing that out, Charles. But you are still dead wrong in asserting that CO2 has been forcing the climate, rather than the climate forcing the CO2 increase, if any. The 100 ppm increase over 60 years fits very well with the reduction in the eccentricity of Earth’s orbit over the next 10,000 years or so, increasing the speed of Earth’s rotation to conserve angular momentum. Detrending the annual Mauna Loa CO2 values actually shows how they relate to the Jupiter and Saturn orbital cycles.
