Climate Alarmists Respond to the Global Warming Pause

Essay by Eric Worrall

The oceans swallowed my global warming? Desperate butt-covering from alarmists who are facing increasingly embarrassing questions about the failure of the world to end.

14 July 2022  16:41

Factcheck: No, global warming has not ‘paused’ over the past eight years

A decade ago, many in the climate community were fixated on an apparent “pause” in rising global surface temperatures. So many studies were published on the so-called “hiatus” that scientists joked that the journal Nature Climate Change should be renamed Nature Hiatus.

However, after a decade or so of slower-than-average warming, rapid temperature rise returned in 2015-16 and global temperatures have since remained quite warm. The last eight years are the warmest eight years since records began in the mid-1800s.

While the hiatus debate generated a lot of useful research on short-term temperature variability, it is clear now that it was a small variation on a relentlessly upward trend in temperatures.

But nearly a decade later, talk of a “pause” has re-emerged among climate sceptics, with columnist Melanie Phillips claiming in the Times this week that, “contrary to the dogma which holds that a rise in carbon dioxide inescapably heats up the atmosphere, global temperature has embarrassingly flatlined for more than seven years even as CO2 levels have risen”.

This falsehood appears to be sourced from a blog post by long-time climate sceptic Christopher Monckton, which claims to highlight the lack of a trend in global temperatures over the past eight years.

In a rebuttal letter to the Times, Prof Richard Betts – head of climate impacts research at the Met Office Hadley Centre and University of Exeter – points out that it is “fully expected that there will be peaks of particularly high temperatures followed by a few less hot years before the next new record year”.

In fact, the last eight years have been unusually warm – even warmer than expected given the long-term rate of temperature increases – with global temperatures exceeding 1.2C above pre-industrial levels. The temperature record is replete with short-term periods of slower or more rapid warming than average, driven by natural variability on top of the warming from human emissions of CO2 and other greenhouse gases. 

There is no evidence that the past eight years were in any way unusual and the hype around – and obvious end of – the prior “pause” should provide a cautionary tale about overinterpreting year-to-year variability today.

Human-emitted greenhouse gases trap extra heat in the atmosphere. While some of this heat warms the Earth’s surface, the vast majority – around 93% – goes into the oceans. Only 1% or so accumulates in the atmosphere and the remainder ends up warming the land and melting ice.

Most years set a new record for ocean heat content, reflecting the continued trapping of heat by greenhouse gases in the atmosphere. The figure below shows the annual OHC estimates between 1950 and the present for both the upper 700m (light blue) and 700m-2000m (dark blue) depths of the ocean.

Read more: https://www.carbonbrief.org/factcheck-no-global-warming-has-not-paused-over-the-past-eight-years/

Lord Monckton apparently stirred the hive by publishing a few articles on the growing pause, like this article from three weeks ago.

His article on the last six years is entertaining because, where’s the warming? Wasn’t there supposed to be a hockey stick or something? Oh yeah, it disappeared into the ocean depths, allegedly.

Over the last 172 years, since 1850, temperatures have risen a little. Except for that period from the 1940s to the 1970s, when the drop in global temperature prompted climate scientists like Stephen Schneider to suggest we should use nuclear reactors to melt the polar ice, to prevent an ice age. Schneider later claimed he’d made a mistake, and went on to become a global warming activist.

But that context doesn’t stop in 1850.

Looking before 1850, there were notable warm periods during the last few thousand years, like the medieval warm period, Roman Warm Period and Minoan Warm Period, which look suspiciously like our current modern warm period, except back then people didn’t drive automobiles.

Going back further, 9000-5000 years ago, during the Holocene Optimum, the sea level was around 2m higher than today, so it was probably pretty warm back then as well.

20,000 years ago, much of the world was covered by massive ice sheets.

Three million years ago, the world was so warm Antarctica was mostly ice free – until the onset of the Quaternary glaciation, which we are still enduring today. To put the Quaternary Glaciation into context, the Quaternary is one of only five comparable great cold periods which have been identified over the last two billion years.

55 million years ago was the Palaeocene–Eocene Thermal Maximum, an extremely warm period of such abundance that our primate ancestors spread throughout much of the world.

When you take a more complete look at the context, rather than the limited 172-year / 0.0000086% slice of climate history Carbon Brief seems to want you to focus on, there is nothing unusually warm about today’s global temperatures. Even if further global warming does occur, if those little primate ancestors with walnut-sized brains could manage to thrive in the Palaeocene–Eocene Thermal Maximum, I’m pretty sure we could figure out how to cope with a small fraction of the warming they enjoyed.
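For what it’s worth, the tiny-fraction figure above can be sanity-checked in a couple of lines. This is just a sketch, assuming the ~2-billion-year window mentioned above for the great cold periods:

```python
# Check the "0.0000086% of climate history" figure:
# 172 years of thermometer records against a ~2-billion-year window.
record_years = 172
window_years = 2_000_000_000
share_pct = record_years / window_years * 100  # convert fraction to percent
print(f"{share_pct:.7f}%")  # 0.0000086%
```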

471 Comments
Scissor
July 21, 2022 10:02 am

Whatever happens, they make something up, like “safe and effective.”

Matt G
Reply to  Scissor
July 21, 2022 10:34 am

The Arctic cooling for the first time since the mid-1970s could mean the second pause is not temporary this time.

Derg
Reply to  Scissor
July 21, 2022 11:57 am

Poor ole Joe wasn’t double vaccinated and triple boosted.

Trying to Play Nice
Reply to  Derg
July 21, 2022 1:54 pm

Or the vaccine just is useless.

Neo
Reply to  Trying to Play Nice
July 21, 2022 1:55 pm

Dr Birx says 4 months

Cosmic
Reply to  Trying to Play Nice
July 21, 2022 3:49 pm

I never again will be hoodwinked by the CDC and these stupid ‘shots’. I don’t like being hoodwinked.

Reply to  Derg
July 22, 2022 7:07 am

The vaccines do not prevent infections and are useless for Omicron, which is a very different, and much less deadly, disease than Covid.

If you recover in 2 to 7 days, you most likely had Omicron.
If you recover in 1 to 3 weeks, or possibly die, you most likely had Covid.

There is no evidence in US all-cause mortality data that vaccines reduced deaths in 2021, versus 2020 with no vaccines.

It is very difficult to determine if vaccines reduced hospitalizations in 2021. Up to half of “Covid patients” in a NYC hospital survey actually went to the hospital for another unrelated disease, not Covid. There were strong financial incentives for them to be called Covid patients after they entered the hospital for other reasons.

If you trust the deaths with Covid statistic (actually any death within 28 days of a Covid positive PCR test, from any cause) the deaths with Covid in 2021, with vaccines, were much higher than in 2020, with no vaccines.

There is strong evidence that Covid vaccines were not safe and were not effective. The root cause is most likely the extremely rushed development time financed by the Trump Administration. There is a good reason that vaccines usually take 5 to 15 years for development and testing, rather than nine months.

Fred the Head
Reply to  Richard Greene
July 24, 2022 11:09 am

The lethality of COVID was pure propaganda. During the shamdemic no one apparently died from influenza, which is an epidemiological impossibility. I propose that those who did perish from infection (COVID, influenza, or other agent) suffered from hypovitaminosis D. This phenomenon has been observed for decades.

Derg
Reply to  Scissor
July 21, 2022 11:57 am

But then again he could be like the others and just got saline.

Sparko
Reply to  Scissor
July 21, 2022 2:29 pm

Just a heads up, but they’re going to invent more “warming” in the oceans.

OweninGA
Reply to  Sparko
July 21, 2022 3:48 pm

easy enough to do – just remove the coolest 10% of Argo floats due to “anomalous readings”, and presto change-o a new ocean warming appears before your eyes.

Sparko
Reply to  OweninGA
July 22, 2022 9:59 am

I mean there’s a new paper out. Trenberth and Tamino are involved. It’s going to be karlization of the Argo set.

Reply to  OweninGA
July 24, 2022 8:34 am

just remove the coolest 10% of Argo floats due to “anomalous readings”

Like Hansen did, culling temperature stations to remove inconvenient rural and higher-altitude stations?

Mr.
Reply to  Sparko
July 21, 2022 5:13 pm

Here’s evidence of what CNN is plotting to do to “milk” the global warming thing because COVID is losing fear factor impact with the public.

Utterly disgusting –

https://twitter.com/i/status/1549770683890253825

John Tillman
Reply to  Scissor
July 21, 2022 6:04 pm

The Antarctic was far from ice-free 3 Ma, but the Arctic largely was.

The Cenozoic Ice Age began with formation of the East Antarctic Ice Sheet 34 Ma, with the formation of the Southern Ocean. Northern Hemisphere ice sheets did have to await the closing of the Isthmus of Panama about 3 Ma.

Reply to  Scissor
July 22, 2022 8:31 am

Don’t forget “common sense” solutions…

Tom Halla
July 21, 2022 10:06 am

What got me involved in following climate change was the attempt with MBH98 to make the LIA and Medieval Warm period go away, and pretend all climate change is anthropogenic. Mann has still not retracted that atrocity.

Reply to  Tom Halla
July 21, 2022 10:22 am

Mann wants to mann-u-facture climate history….what a hockey puck!

MarkW
Reply to  Tom Halla
July 21, 2022 11:39 am

Various alarmists still try to claim that nobody has refuted the Hockey Stick.

Dave Andrews
Reply to  MarkW
July 22, 2022 8:09 am

Tom Wigley and Keith Briffa were sceptical, as shown in the Climategate e-mail release.

Wigley “I have just read the M&M stuff criticising MBH. A lot of it seems valid to me. At the very least MBH is a very sloppy piece of work – an opinion I have held for some time”

Briffa “I believe the recent warmth was probably matched about 1000 years ago”

JCM
July 21, 2022 10:16 am

Dennis says we’re in another “hiatus”, in Dessler’s ECS & Cloud Feedback symposium at this point in the video, posted earlier in April 2022. Nobody argued. It is being discussed by the academics and it’s widely acknowledged.

https://youtu.be/aQznFJ9eVrk?t=2187

JCM
Reply to  JCM
July 21, 2022 10:48 am

Sorry, Dennis Hartmann. Schmidt chimes in shortly after and points out his southern ocean cooling observations.

Gary Pearse
Reply to  JCM
July 21, 2022 2:03 pm

“and obvious end of – the prior “pause” should provide a cautionary tale about overinterpreting year-to-year variability today.”

Betts is one of the several dozen climate spinners on the list of those who will share the blame for the worst policy-generated global economic crisis the world has known (eclipsing that of the 20th century monsters). Trillions have been wasted on a totally failed false-front science used to enable a neo-Marxist Great Reset that will cost many trillions more of the precious funds needed to repair every facet of our trashed civilization, from our education system, K to phony PhD, to hundreds of other things.

Before the middle of the first decade of the new millennium, I think Climate Science was in the main honest, if misguided in its beliefs. After that, the total failure of climate predictions, the continual moving of goalposts, the fiddling of climate data, the blocking of adversarial climate papers, and dirty tricks like pushing the real 20th century temperature highstand of the 1930s-40s down to make the 1998 El Niño the new record, etc, were prima facie criminal activity to cling to the broken theory and preserve the cash trough. You even cajoled a failing prime minister to steal another billion from the taxpayer for climate on her way out of office.

Shame, shame on you and your co-conspirator colleagues. Oh, and you also know that it was an El Niño year that interrupted the 18-year-plus “Dreaded Pause”, now apparently resuming. You know Gavin Schmidt stated last year that climate models are “running way too hot, and we don’t know why”. You and Gavin do know why! A wise man these days would be seriously considering cementing in his pension in these turbulent times.

Reply to  Gary Pearse
July 21, 2022 3:21 pm

The very use of “anomalies” is a fraud of its own. Tell us the 5-number descriptor of the values: minimum, first quartile, median, third quartile, and maximum of the data used to calculate these multi-year averages and then the anomaly of the current average compared to the historical average.

If the average absolute global temp is 15C and we have a 0.13C/decade increase, that’s a 0.9% change. Something no one could possibly consider to be “catastrophic”!

Add to that a MINIMUM possible uncertainty of at least +/- 0.5C and you couldn’t even tell if the change was or wasn’t 0.13C over a decade!
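As a quick arithmetic check of the figures in this comment (the 15C baseline and the 0.13C/decade trend are the commenter’s assumed values, not measured data):

```python
# Relative size of a 0.13 C/decade trend against two possible baselines.
baseline_c = 15.0        # assumed global mean surface temp, deg C
trend_per_decade = 0.13  # assumed warming rate, deg C per decade

pct_vs_celsius = trend_per_decade / baseline_c * 100
pct_vs_kelvin = trend_per_decade / (baseline_c + 273.15) * 100

print(f"vs 15 C baseline:  {pct_vs_celsius:.2f}% per decade")  # ~0.87%
print(f"vs 288 K baseline: {pct_vs_kelvin:.3f}% per decade")   # ~0.045%
```

The Celsius figure rounds to the quoted 0.9%, and the Kelvin figure shows why an absolute-temperature baseline makes the change look smaller still.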

Pop Piasa
Reply to  Tim Gorman
July 21, 2022 4:13 pm

It’s all how you spin it (as Popper and Feynman spin in their graves). This is truly the age of neo-deception and the entrapment of humanity by the technocracy.

H B
Reply to  Tim Gorman
July 21, 2022 4:48 pm

Put it in Kelvins and it is even less

paul courtney
Reply to  Tim Gorman
July 21, 2022 5:37 pm

Mr. Gorman: I’ve read the arguments re: anomalies, and I think there may be a good faith use for such math devices. But this is Climate Science, so there is no chance this is a good faith use!

Reply to  paul courtney
July 22, 2022 6:49 am

Using anomalies is fine if *all* the rules are followed. But variance and uncertainty attach to the anomalies in the same magnitude as they do to the absolutes.

The climate crowd neither specifies the variance of their data nor its uncertainty. They substitute the average residual between the stated data and the linear trend line for its “uncertainty”. It isn’t uncertainty; it is a best-fit metric, and it ignores the uncertainty associated with the stated measurement values.

They try to justify the use of anomalies as a way to weight all data equally, but that’s just the beginning of the fraud. Since winter temps have a larger variance than summer temps, jamming them all together using anomalies makes no sense at all, especially when they don’t bother to even quote what the variance of the actual data is! Not only that, but jamming winter temps and summer temps from the same month together (i.e. northern hemisphere temps in July with southern hemisphere temps in July) generates at least a bi-modal distribution, resulting in an average value that is meaningless – and it should be recognized as such by a trained statistician.

I simply don’t trust *anything* put out by most climate scientists today, be it pro-CAGW or anti-CAGW. I only trust actual plots of absolute temps and even then am skeptical if the author doesn’t recognize the uncertainties associated with those temperature measurements. When almost all temperature measurement devices, both old and new, have at least an uncertainty of +/- 0.5C it is impossible to know exactly what is happening with the climate in the hundredths digit. Averaging measurements from different things using different devices does *not* guarantee a normal distribution of measurements where uncertainty cancels – i.e. averaging temps on a global basis doesn’t decrease uncertainty in the result, it only grows it.
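The bi-modal point above can be illustrated with made-up numbers (these temperatures are invented for illustration, not real station data):

```python
# Averaging July readings from a NH summer cluster and a SH winter cluster
# gives a mean that lies in a gap where no actual reading sits.
import statistics

nh_july = [24.0, 25.5, 26.0, 27.5, 25.0]  # invented NH summer temps, deg C
sh_july = [2.0, 4.5, 3.0, 5.5, 4.0]       # invented SH winter temps, deg C

combined = nh_july + sh_july
mean = statistics.mean(combined)
nearest = min(abs(t - mean) for t in combined)

print(f"combined mean: {mean:.1f} C")              # 14.7 C
print(f"nearest reading is {nearest:.1f} C away")  # 9.2 C
```

The mean sits roughly 9 degrees from every observation, which is the sense in which an average over a bimodal distribution describes no real place.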

Bill Rocks
Reply to  Tim Gorman
July 22, 2022 9:55 am

Yes.

Gary Pearse
Reply to  Tim Gorman
July 22, 2022 6:42 pm

I agree. When questioned on it though, the argument is ‘well, we don’t know the global average T exactly, so we use anomalies, which we can measure more accurately’. I don’t see how, when the same problems that plague calculating the average T plague the measurement of average global anomalies.

But measuring as anomalies does invite fiddling temperatures which is an ‘extra degree of freedom’ for cooking data. It also allows easy use of algorithms for making hundreds of small adjustments daily.

When you argue a point regarding apparent artifacts in the data, the response is usually ‘Oh, that’s because of a station move adjustment’. Over a long time looking at T data, I noted some station moves for sites that weren’t showing warming, and it became clear that station moves had become part of the T fiddler’s toolbox. The classic example was discontinuing the Death Valley Ranch station because it stubbornly held onto the world’s highest temperature, set in 1907, over 100 years later.
They moved the site a few miles down the road to a spot with a heat-enhancing microclimate. In 2021 they announced a ‘new’ world record which was still a few degrees cooler than the old ranch one. Not knowing the back story, the world was stricken by this ‘warming’.

Reply to  Tim Gorman
July 24, 2022 9:09 am

“The very use of ‘anomalies’ is a fraud of its own.”

“Anomalies” without identifying context and error bounds.
Alarmists ignore the fact that smudged temperature adjustments and infilled temperatures increase error bounds.

On top of that, alleged climate “experts” “average” thousands of unique individual temperature measurements.

Greg B
Reply to  Tim Gorman
July 31, 2022 6:46 pm

“If the average absolute global temp is 15C and we have a .13C/decade increase that’s a 0.9% change. Something no one could possibly consider to be “catastrophic”!”

Assuming the temperature at which water turns to ice is 0.

Reply to  Gary Pearse
July 21, 2022 8:43 pm

Considering the efforts to convert private pensions to public funds, there may be little security there.

MARTIN BRUMBY
Reply to  Gary Pearse
July 21, 2022 9:37 pm

Betts has a history of telling climate lies that makes BoJo look like George Washington.

July 21, 2022 10:23 am

 Except for that period between the 1940s to 1970s, when the drop in global temperature …
_____________________________________________________________

A simple plot of the January-December Annual Mean from GISTEMP LOTI shows that Temperature also dropped from about 1880 to 1910 which is never explained.

Reply to  Steve Case
July 21, 2022 10:38 am

1880 to 1910 is not global and is not accurate

Crispin Pemberton-Pigott
Reply to  Richard Greene
July 21, 2022 11:45 am

Richard:

You can argue that it is not accurate but you cannot show that it was not global. Based on the fact that evidence speaks for itself, we can at least say there is measurement evidence of cooling from 1880-1910. Proof of global cooling in the form of widely dispersed glacier terminations is still obtainable.

It doesn’t matter if cooling cannot (now) be proven to be global. Asserting that evidence from one region is not proof of global cooling is a risky bet. Evidence from the only region with measurements is indicative of the whole, absent any contrary evidence, and you offered none.

Claiming that 93% of the heating (that is supposed to exist) is going into the oceans is downright humorous. The claim from Denver is that the deep oceans are warming. Proving that is basically impossible. Essentially they say the claim stands because there is no definitive proof the claim is not true. That is like the EPA claiming that all airborne particulate matter from any source is equally toxic (the equitoxicity rule) because they “have no proof that it is not”. I kid you not, that is what they said.

I think you should stay clear of statements about what was not global without evidence. Stick to claiming the proof of “globality” is not in evidence. Then you can sit in judgement of evidence as it appears. If you have credibility, people will listen to your sage assessments.

Reply to  Crispin Pemberton-Pigott
July 21, 2022 12:11 pm

Evidence from the only region with measurements is indicative of the whole, absent any contrary evidence …
____________________________________________________

Excellently stated

Reply to  Steve Case
July 21, 2022 12:44 pm

Excellently stated nonsense.
Bizarre logic
Saying ‘I’m right because you can’t prove me wrong’.

That’s not how science works
Science requires data.
Data quality must be analyzed to determine if there are sufficient data, with sufficient accuracy, to draw a conclusion. The global average temperature before 1920 is a rough estimate of the Northern Hemisphere only. Not an accurate global average.

Drake
Reply to  Richard Greene
July 21, 2022 1:35 pm

And your statement that it was not global is backed by NO EVIDENCE and is thus not scientific.

So assuming that when ALL the areas with temperature records show warming, that the rest of the world does not is typical Richard Greene.

Since you seem to comment at cross purposes every day, I would like to know why you are commenting at all.

Are you just a troll?

Reply to  Drake
July 21, 2022 4:27 pm

The following charts show locations of land based weather stations in various years. They have poor global coverage before 1920 and are still not so great today.

Honest global warming chart Blog: Poor distribution of land-based weather stations (elonionbloggle.blogspot.com)

Cosmic
Reply to  Richard Greene
July 21, 2022 3:56 pm

Surface data in general, even today is a ‘rough estimate’

Pop Piasa
Reply to  Cosmic
July 21, 2022 4:27 pm

Certainly rough, when you consider that most of the atmosphere at Earth’s surface is over ocean. There we only have satellite data for the most part. No thermometers next to runways.

Angus
Reply to  Richard Greene
July 21, 2022 10:11 pm

Well stated Richard. So we don’t need to worry about 2°C above preindustrial because we do not know the preindustrial temperature.

Reply to  Angus
July 22, 2022 7:11 am

Central England’s three weather stations have already recorded well over +2 degrees C of warming since the very cold 1690s. Closer to +3 degrees C.
People living there in the 1690s would have loved today’s much more pleasant temperatures.

JD Lunkerman
Reply to  Richard Greene
July 22, 2022 7:56 am

“People living there in the 1690’s would have loved today’s much more pleasant temperatures.” Exactly. Today’s temperatures are pleasant and life sustaining. Get your hand off the alarm button.

Reply to  Crispin Pemberton-Pigott
July 21, 2022 12:36 pm

There were few measurements in the Southern Hemisphere before 1920 and ocean measurements were mainly in Northern Hemisphere shipping lanes with questionable accuracy of buckets and thermometers.

“Evidence from the only region with measurements is indicative of the whole, absent any contrary evidence, and you offered none.”

Anti-science baloney
Sparse coverage of the Northern Hemisphere is not a global average. Bucket and thermometer measurement methodology makes the problem worse.

John Tillman
Reply to  Richard Greene
July 21, 2022 6:51 pm

Global coverage by well-maintained and sited weather stations was better during the British Empire than after its demise, 1948-64, especially in the Southern Hemisphere. No continuous South Pole data, however, until 1958.

john harmsworth
Reply to  Richard Greene
July 21, 2022 12:14 pm

If we’re concerned about accuracy, no global readings before the satellite era make the cut, and even those are debatable. That includes Ice proxies and Mann’s treemometer concoctions and any other numbers you want to discuss. So what were we getting worked up about? Greta’s feelings?

Reply to  john harmsworth
July 21, 2022 12:47 pm

I have no idea why people are getting worked up about a prediction of climate doom. I guess they have blind trust in leftist politicians and government bureaucrats.

We love global warming here in Michigan USA.
Give us more of that.

John Bell
Reply to  Richard Greene
July 21, 2022 2:46 pm

Yup! Sure has been hot and dry here in SE Michigan this summer. I want some rain. Must be the La Niña?

Reply to  john harmsworth
July 21, 2022 4:37 pm

We have something better than a global average temperature that NOT ONE PERSON lives in:

We have almost 8 billion first-hand witnesses who have lived with up to 47 years of global warming. If you were born after 1974 you have experienced global warming for your entire life.
I bet a lot of people barely noticed. No one was harmed.

We have an advantage here in Michigan, where we have lived in the same home since 1987, and four miles south for 10 years before that. Living in one place makes it easier to notice a small gradual change in the climate. Our winters are generally not as cold as they were in the 1970s. And last winter had the least amount of snow of any winter since 1977. So not only have we been able to notice the mild global warming, but we also love it. And we don’t need an average temperature number to tell us how climate change actually affected us.

Don Perry
Reply to  Richard Greene
July 22, 2022 4:58 am

I’ve lived 80 years and see clearly that the warming and cooling are cyclical. What I’m experiencing this year is bringing back memories of the 1950s, and your choosing 1974 brings back memories of the extreme snow and cold I experienced then. It’s no warmer now than what I experienced repeatedly. I’ve lived in the same spot since 1968 and I see the same ups and downs decade to decade.

Reply to  Don Perry
July 22, 2022 7:14 am

But our first-hand experiences with actual climate change over many decades don’t count because we are not climate scientists with supercomputers and climate models. And even worse, we never make scary predictions of the future climate !

Felix
Reply to  Richard Greene
July 21, 2022 12:16 pm

But 1850 is ok?

Reply to  Felix
July 21, 2022 12:50 pm

Other than saying the climate has warmed since the Maunder Minimum period in the late 1600s, we do not have accurate global data until UAH in 1979. Surface data after WWII could have been reasonably accurate — still too much infilling — but has had huge arbitrary adjustments for the 1940 to 1975 period. Can’t trust those climate bureaucrats.

Reply to  Richard Greene
July 21, 2022 2:36 pm

I wouldn’t put too much faith in UAH. We truly have no idea what the uncertainty of UAH is. It doesn’t even measure temperature but atmospheric emissions. Those are then converted to temperature. No idea of what the uncertainties are associated with the emission measurements and no idea of the uncertainties of the conversion algorithms.

All we really get is the average residual between the UAH measurements and the UAH average trend line and that is called an “uncertainty”. It is actually a best-fit metric for linear trend lines and the stated values, it is not measurement uncertainty.

Reply to  Tim Gorman
July 21, 2022 4:39 pm

It’s safe to assume the average temperature either went up or down since 1979. Every data source says up. I say up too.

Chris Hanley
Reply to  Richard Greene
July 21, 2022 3:17 pm

I think it was the late Prof Bob Carter who made the point that the uncertainty range of the supposed global thermometer record from ~1850 was greater than the claimed temperature rise since then.

Cosmic
Reply to  Richard Greene
July 21, 2022 3:53 pm

Gee thanks know-it-all. Quit thinking you or me have ANYTHING to do with earthly temp changes besides ill-placed thermometers.

Reply to  Steve Case
July 21, 2022 11:10 am

Krakatoa erupted in 1883 and apparently injected so much sulfur and ash into the upper atmosphere that it caused a global cooling event that lasted well into the 20th century. (1:25:00 in the video)

https://www.youtube.com/watch?v=yCXSDzo0tLg

They don’t quantify how much cooler, or how far exactly “well into the 20th century” means, but I’m sure it was a contributory factor.

Reply to  Right-Handed Shark
July 21, 2022 11:24 am

A volcano affecting the global climate for over 17 years?
I don’t believe it.

Reply to  Richard Greene
July 21, 2022 11:30 am

Science isn’t about belief.

Reply to  Retired_Engineer_Jim
July 21, 2022 11:42 am

Volcanoes can affect the climate for a few years.
Not 17+ years.
I know baloney when I read it.
And you piled on with a meaningless character attack.

Drake
Reply to  Richard Greene
July 21, 2022 1:40 pm

Troll on Richard.

In one comment you tell someone else they are unscientific for not providing “data”, then in this post you say, without any proof, “I know baloney when I read it”, without ANY data.

You ARE just a troll.

Reply to  Drake
July 21, 2022 4:41 pm

Find one scientist who even believes any one modern era volcano affected the global climate for 17+ years

john harmsworth
Reply to  Retired_Engineer_Jim
July 21, 2022 12:23 pm

And Climate “science” ain’t about science. Hence the autistic girl as spiritual leader and chastiser-in-chief.

MarkW
Reply to  Richard Greene
July 21, 2022 11:42 am

If the cooling is enough to cause ice fields to increase in size, then even after the sulfur drops out of the atmosphere, the ice fields will continue to cool the planet until they finish melting.

Reply to  MarkW
July 21, 2022 4:44 pm

SO2 emissions have already been reduced by 75% to 80% since 1970. There was global cooling with high levels of SO2 before 1975, and global warming with high levels of SO2 after 1975. Conclusion: SO2 is a minor climate variable.

MarkW
Reply to  Richard Greene
July 21, 2022 8:26 pm

Where the SO2 is makes all the difference. Emissions from power plants and such stay in the lower atmosphere and are washed out of the atmosphere quickly. Volcanoes, especially the big ones, put SO2 into the upper atmosphere where they can stick around for years.

Trying to judge what impact volcanoes have by looking at power plant emissions is a fool’s errand.

Crispin Pemberton-Pigott
Reply to  Richard Greene
July 21, 2022 11:46 am

Cooling is expected to last 2-3 years. However, maybe it tripped a tipping point. Tipping points are all the rage these days. Why not in 1886?

Reply to  Crispin Pemberton-Pigott
July 21, 2022 12:10 pm

Crispin Pemberton-Pigott:

For a VEI4 eruption, it takes at least 5 years before its emissions fully settle out of the atmosphere. After that time, temperatures begin to rise because of the fully cleansed air.

For larger eruptions, more time is required: 15 years or more for a VEI7.

Reply to  burl Henry
July 21, 2022 4:45 pm

The only unambiguous VEI7 eruption to have been directly observed in recorded history was Mount Tambora in 1815, which caused the Year Without a Summer in 1816.

john harmsworth
Reply to  Crispin Pemberton-Pigott
July 21, 2022 1:10 pm

Not out of the question, but much larger volcanoes have blown off in the last few thousand years. I believe that the Arctic Ocean is the driver of long-term swings in temperature. When ice extent is high, it insulates the ocean from giving up heat and also increases albedo, contributing to keeping the extent at a higher level. Over time, however, the ocean warms under the ice, from water intruding from the Pacific and Atlantic. Eventually, the ice begins to thin from underneath. As it breaks up, wind becomes a factor, cooling the surface and mixing the surface layer. Cooling increases as albedo drops, so it takes some time for the ice to get back to minimum. As part of this process, Arctic air temperatures swing from continental cold toward a more marine state, with follow-on effects globally on temperatures.

cgh
Reply to  Crispin Pemberton-Pigott
July 21, 2022 4:55 pm

Or the even larger event in 1815.

john harmsworth
Reply to  Richard Greene
July 21, 2022 12:18 pm

Pretty sure that would mean that an individual volcano has to be averaged into the effects of all the much smaller ones. I don’t see how you avoid that conclusion, which reduces the short-term effect of the larger volcano. Honestly, 50 years of study and they can’t quantify the effect of anything? Except that whatever happens must make it worse. This is what we’re calling science these days?

cgh
Reply to  Right-Handed Shark
July 21, 2022 4:54 pm

Krakatoa is not the only such event in recent history. The explosion of Mount Tambora in 1815 produced a huge drop in insolation for years afterwards. 1816 was known as 'the year without a summer', and it ushered in years of crop disasters on a global basis. It's estimated by some to have reduced the world's average temperature by at least half a degree C.

Reply to  cgh
July 22, 2022 1:44 am

It's interesting that the last London Frost Fair of January 1814 predates the Tambora eruption of 5 April 1815 by over a year. The winter of 1813-14 is regarded as one of Europe's coldest; Napoleon had problems with supplies because of that cold winter. So the eruption was given a head start, in Europe at least.

Reply to  Right-Handed Shark
July 21, 2022 10:20 pm

Not in Australia: we had the millennium drought along with our highest recorded temperatures during the 1890s. It was so hot that birds fell out of the sky, and citizens were sent by train to the tablelands closer to Sydney. Some 45 people died in one night in January 1896 in the Bourke area in western NSW. Now that is hot.

Reply to  Right-Handed Shark
July 24, 2022 10:20 am

"Krakatoa erupted in 1883 and apparently injected so much sulfur and ash into the upper atmosphere that it caused a global cooling event that lasted well into the 20th century."

Willis disproved this claim several times; see, e.g., 'Stacking Up Volcanoes' by Willis Eschenbach. Krakatau affected temperatures for only a few months.

Reply to  Steve Case
July 21, 2022 11:40 am

Steve Case:

The temperature drop was due to volcanic eruptions in 1880 (VEI4?), two in 1883 (VEI4 and VEI6), three in 1886 (VEI4, VEI5, and VEI4?), six VEI4 between 1888 and 1889, four in 1902 (VEI4, VEI4?, VEI4, VEI5?), and four between 1903 and 1907 (VEI4, VEI4, VEI4?, VEI5).

That was a period when the atmosphere was well polluted with dimming volcanic SO2 aerosols, as well as Industrial SO2, which rose from 9 Megatons in 1880 to 32 Megatons in 1910.

Reply to  burl Henry
July 21, 2022 4:52 pm

Volcanic activity has been increasing for the past 200 years. You have cherry picked data to obscure that trend.

Global Volcanism Program | Has volcanic activity been increasing?

Matt G
Reply to  Steve Case
July 21, 2022 12:39 pm

There were some observations in the Arctic, some areas of the Northern Hemisphere, Australia and New Zealand that showed cooling. The problem with GISS is that coverage was very poor between 1880 and 1910 over the entire ocean, Africa, the Middle East, South America, Antarctica, and everywhere from Mexico south in North America.

https://data.giss.nasa.gov/gistemp/station_data_v4_globe/

Richard M
July 21, 2022 10:26 am

Fact checkers seem to always leave out important … facts.

What about the PDO shift in 2014? It completely explains the warming that led to the “rapid temperature rise”. Since then there’s been a 7 year cooling trend of nearly 0.2 C / decade. We are already returning back to the situation prior to 2014 and with the AMO about ready to change into its cool phase, the cooling will only accelerate.


ResourceGuy
Reply to  Richard M
July 21, 2022 10:51 am

Careful, you might wake up even more people than the initial hockey stick sceptics and independent thinkers with these details and factors. That could cause scrutiny from Soros-funded DAs.

Reply to  Richard M
July 22, 2022 7:18 am

fact checkers are fact chokers

Matt G
July 21, 2022 10:28 am

The only difference between the two pauses has been the strong El Nino in 2015 and data manipulation of global temperature sets like GISS, HADCRUT4 and RSS trying to hide the decline.
https://www.woodfortrees.org/plot/uah6/from:1998/to:2015/plot/uah6/from:2016/plot/uah6-land/from:1998/to:2015/trend/plot/uah6/from:2016/trend

July 21, 2022 10:28 am

It might not be a pause… it might be the topping-out of the Modern Warming Period and the beginning of a return to a Little Ice Age-type climate. Climate changes; none of us alive today will see the next bottom. Call it the Modern Cool Period, beginning soon.

July 21, 2022 10:37 am

Monckton is a data-mining fraud. He hurts the cause of climate realists. The UAH data begin in 1979; he truncates data he does not like to create meaningless short-term trends.

"The (UAH) linear warming trend since January, 1979 still stands at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land)."

Monckton cherry-picks short-term trends because he is biased.

There have been many short-term flat trends since the latest global warming trend began during the 1690s (the coldest decade of the Maunder Minimum period). There was even a significant global cooling trend from 1940 to 1975, which has since been "revised away".

Not one of those short-term flat trends, nor the 1940-to-1975 global cooling, had any ability to predict the climate changes that followed. Global warming continued even after the 1940-to-1975 cooling trend. Those trends were meaningless variations within a longer-term warming trend.
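The point that short flat stretches routinely appear inside a long-term warming trend is easy to illustrate with synthetic data. A minimal sketch in Python (the trend and noise values here are illustrative assumptions, not real UAH numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies: a steady +0.13 C/decade underlying trend
# plus random month-to-month noise (illustrative values, not real data).
months = np.arange(480)                          # 40 years of monthly data
anoms = 0.013 / 12 * months + rng.normal(0, 0.15, months.size)

def trend_per_decade(y):
    """OLS slope of a monthly series, expressed in deg C per decade."""
    x = np.arange(y.size)
    return np.polyfit(x, y, 1)[0] * 120          # deg/month -> deg/decade

# Count 8-year (96-month) windows whose fitted trend is flat or negative,
# even though the underlying trend is warming throughout.
n_windows = anoms.size - 96
flat = sum(trend_per_decade(anoms[i:i + 96]) <= 0 for i in range(n_windows))
print(f"full-record trend: {trend_per_decade(anoms):+.2f} C/decade")
print(f"flat or cooling 8-year windows: {flat} of {n_windows}")
```

With a constant built-in warming rate, some 8-year windows can still come out flat or cooling purely from the noise, which is the sense in which short flat trends do not contradict the long-term trend.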

Richard Page
Reply to  Richard Greene
July 21, 2022 10:48 am

If you’re going to attack someone on their work, provide proof of your position.

Reply to  Richard Page
July 21, 2022 11:02 am

Monckton has the whole 42-year UAH record available. 42 years is climate; 8 years is weather. He cherry-picks only the years he wants in order to make a meaningless claim. The Monckton starting year of 2015 includes the unusually large 2015/2016 El Nino heat release, unrelated to greenhouse gases. Choosing that starting year is also biased.

Dan
Reply to  Richard Greene
July 21, 2022 11:30 am

Even the 42 years of the UAH record is not enough to establish long-term trends. First, we know that from the mid-1950s to the late 1970s we were in a cooler period, but we don't have that data in the satellite record. Second, there were years in the 1980s and early 1990s in which global temperatures were depressed by stratospheric volcanic eruptions, which had a non-CO2-related effect on the per-decade UAH trend. And the 1930s to mid-1950s were nearly as warm as today, if not just as warm.
These variable conditions may be the result of longer-term trends influenced by the PDO and AMO. But the issue is that our lack of a complete global dataset more than 42 years old hinders our ability to understand what, if any, contribution CO2 is adding to the recent rise in warming over and above natural cycles. Therefore, anyone who says that the science is settled is either ignorant or lying.

Reply to  Dan
July 21, 2022 11:54 am

The pre-1979 surface data can't be trusted. In 1975, NCAR reported significant global cooling from 1940 to 1975 that has since been "revised away" with no explanation. It's true the global average temperature accuracy is questionable before 1979 and the use of weather satellites.

But your statement about that is wrong.

You wrote: “The issue is that our lack of a complete global dataset more than 42 years old hinders our ability to understand what, if any, contribution CO2 is adding to the recent rise in warming over and above natural cycles.”

We could have an accurate data set for 100 years and we STILL would not know the exact effect of CO2. There are too many climate change variables to know exactly what CO2 does. To know exactly what CO2 does, you would have to know what every other climate change variable does. We don’t know that, and are not even close to knowing that.

The warming since 1975 could be 100% natural or 100% CO2, but either would be a wild guess, and both are probably very unlikely. The rate of warming since 1975 suggests 100% natural causes are unlikely.

CO2 is a greenhouse gas. Adding CO2 to the atmosphere should impede Earth's ability to cool itself by some amount. There is no evidence that amount has been harmful in any way. AGW is a reasonable assumption. But CAGW is an unreasonable prediction.

Reply to  Richard Greene
July 21, 2022 12:54 pm

It's rather frustrating that alarmists continue to insist that, since their assessment of what data can and cannot be trusted begins at the end of the strongest cooling trend in recent history, essentially climate begins at that moment.

It’s a lot like watching a game of hockey starting late in the third period and analyzing teams based on how they behave when the score is particularly one-sided and time is running out.

Of course, when you then extrapolate based on that tiny amount of biased information, you’re going to get a hockey stick.

The data over that short a time period doesn't suggest much of anything. Especially since, when you use your conclusions to go back in time, it's a very poor model for past temperature and requires gymnastics like flattening out the Little Ice Age and the Medieval Warm Period.

And even then, it's not enough. You have NOAA altering the past temperature record because the trend in the 1930s thoroughly breaks the loose association between CO2 and temperature. And even then, it's not enough unless you program "tipping points" into models.

Why assume changing CO2 drives anything? It’s just one of millions of variables, and after 30 years of desperately trying to make this tail wag the climate dog, it still requires charlatans like Mann committing malpractice on the scientific method.

Reply to  Joe Gordon
July 21, 2022 4:56 pm

"It's rather frustrating that alarmists continue to insist that since their assessment of what data can and cannot be trusted begins at the end of the strongest cooling trend"

It’s worse than that.
Alarmists predict a future global warming rate 2x faster than the 1975 to 2020 period. And they never mention they made the same predictions from 1975 to 2020, and were wrong for the entire 45 years!

Reply to  Richard Greene
July 21, 2022 1:10 pm

"We could have an accurate data set for 100 years and we STILL would not know the exact effect of CO2. There are too many climate change variables to know exactly what CO2 does."

Exactly! Which is why short term trends *must* be considered. They are not meaningless. They are indicators of something occurring that long term trends do not adequately address.

"The rate of warming since 1975 suggests 100% natural causes are unlikely."

Because of uncertainty in the measurements, including satellite measurements, how do we *really* know what the rate of warming actually is? The uncertainty interval totally masks the entire area the supposed measurements exist in. How do you determine a 0.13C difference when the uncertainty is more than +/- 0.5C?

Averaging does *NOT* increase accuracy nor does it lessen uncertainty, not when you are combining multiple measurements of different things. And that is what temperature measurements are, multiple measurements of different things. There is no guarantee they will generate a normal curve which can be described by the usual statistical descriptions of standard deviation and average. Temperature data should be described using the 5-number description: minimum, first quartile, median, third quartile, and maximum. Why are climate scientists, especially CAGW advocates, so reticent about doing this?
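For what it's worth, the five-number summary asked for here is trivial to produce. A minimal sketch, with the temperature values invented purely for illustration:

```python
import numpy as np

def five_number_summary(values):
    """Minimum, first quartile, median, third quartile, maximum."""
    q = np.percentile(values, [0, 25, 50, 75, 100])
    return dict(zip(("min", "q1", "median", "q3", "max"), q))

# Hypothetical daily maximum temperatures (deg C) for two weeks:
temps = np.array([28.1, 30.4, 25.7, 31.2, 29.9, 27.5, 33.0,
                  26.8, 30.1, 28.9, 32.4, 29.0, 27.2, 31.8])

print(five_number_summary(temps))
print(f"mean: {temps.mean():.2f}  std: {temps.std(ddof=1):.2f}")
```

Unlike the mean and standard deviation, the five-number summary makes no assumption that the data are normally distributed, which is the point being argued above.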

Reply to  Tim Gorman
July 21, 2022 2:01 pm

Exactly! Which is why short term trends *must* be considered.

I’ll ask again, and not expect an answer, why do you consider the last 8 years to be a short term trend that needs to be considered and not the last 10 or 12 years?

Reply to  Bellman
July 21, 2022 3:12 pm

They *ALL* need to be considered! When doing forecasts, however, the further back you go the less weight individual data values should have. The past eight years should be weighted heavier than years 10, 11, and 12.

It’s EXACTLY like forecasting weather! What happens today in weather is a far better predictor of what is going to happen tomorrow than the weather from 10, 11, or 12 days ago.

Do *YOU* believe the average weather of the past 30 days is a better predictor of the weather tomorrow than the weather you are experiencing today?

Why wouldn’t what happened this year be a better predictor of next year than a year 12yrs in the past? 20 yrs in the past? 40yrs in the past?

Your claim seems to be that what happened 40yrs ago is just as predictive of next year as what happened this year.

Forgive me but that belief is a religious one, not a scientific one.
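The idea that recent data should be weighted more heavily has a standard statistical form: weighted least squares with exponentially decaying weights. A sketch on a stylized, noise-free series (32 years of steady warming followed by 8 flat years; the half-life value is an arbitrary assumed choice, not anything from the thread):

```python
import numpy as np

# Stylized series: 32 years warming at +0.13 C/decade, then 8 flat years.
a = 0.013 / 12                                   # deg C per month
ramp = a * np.arange(384)
series = np.concatenate([ramp, np.full(96, ramp[-1])])

def trend(y, half_life=None):
    """Slope in deg C/decade, optionally weighting recent months more.

    half_life (in months) sets how fast the weights decay going back in
    time; None gives ordinary equal-weight least squares.
    """
    x = np.arange(y.size)
    if half_life is None:
        w = np.ones(y.size)
    else:
        age = y.size - 1 - x                     # months before "now"
        w = 0.5 ** (age / half_life)
    # np.polyfit applies weights to the residuals, so pass sqrt(w)
    # to get the usual weighted-least-squares objective.
    return np.polyfit(x, y, 1, w=np.sqrt(w))[0] * 120

print(f"equal-weight trend:     {trend(series):+.3f} C/decade")
print(f"recency-weighted trend: {trend(series, half_life=24):+.3f} C/decade")
```

The equal-weight fit reports warming over the whole record, while the recency-weighted fit sits much closer to the flat tail; which of the two is the "right" summary is exactly the disagreement in this exchange.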

Reply to  Tim Gorman
July 21, 2022 4:47 pm

“They *ALL* need to be considered!”

So why do I never see anyone here doing that? The only short term trends that are considered noteworthy are those showing a negative trend. If more recent term trends are considered more important, why did Monckton spend so much time going on about an 18 year “pause” whilst ignoring the more recent 7 or 8 years showing an accelerated rate of warming?

“It’s EXACTLY like forecasting weather!”

We are not trying to forecast anything here, we are trying to establish what the current trend is and if it has changed.

“Do *YOU* believe the average weather of the past 30 days is a better predictor of the weather tomorrow than the weather you are experiencing today?”

No I don’t. On the other hand the weather of the past 30 years is likely to be a better indicator of the weather in a few years time than the weather yesterday.

“Your claim seems to be that what happened 40yrs ago is just as predictive of next year as what happened this year.”

My claim is that looking at all the data over the last 40 years is a better indicator of what's currently happening than the last few years, and that before you assume it's not you should show a statistically significant indication there has been a change.

“Forgive me but that belief is a religious one, not a scientific one.”

Yes, wanting to see evidence for something is the religious approach.

Reply to  Bellman
July 21, 2022 4:54 pm

"more recent 7 or 8 years showing an accelerated rate of warming?"

Accelerated compared to what?

See attached. I don’t see any acceleration in the rate of warming!

uah_monthly_anomaly.png

Reply to  Tim Gorman
July 21, 2022 5:43 pm

Accelerated compared with the previous warming. e.g. 0.26°C / decade compared with 0.13°C / decade.

Of course you don’t see any acceleration on that graph. That’s because a) there isn’t any, and b) you are not cherry-picking the most recent years at any point.

For example here is the trend from December 2007 to October 2015. Trend is 0.28°C / decade, more than twice the overall trend. By your logic this should have been of more interest than the 18 year pause.

Reply to  Bellman
July 21, 2022 5:44 pm

Here’s the graph:

20220721wuwt.png

Reply to  Bellman
July 22, 2022 3:32 am

Accelerated compared with the previous warming. e.g. 0.26°C / decade compared with 0.13°C / decade.

OK, let’s look at all of the “decadal (120-month trailing) trends” available from the UAH dataset, and how they have evolved (/ are evolving ?) over time.

Following the 1997/8 "massive / Godzilla" El Nino the UAH decadal trends show a jump from a (roughly) zero value to a double-peak (in 1998 and 2001/2), followed by a decline to a minimum at the beginning of 2012.

Following the 2015/6 “massive / Godzilla” El Nino the UAH decadal trends show a double-peak (in 2017 and 2020), and are currently “trending” lower …

Will the decadal trend continue down for the next 7 or 8 years ?

Nobody “knows” for certain.

UAH_Decadal-trends_To-June-2022.png
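The "all available trailing 120-month trends" calculation described above can be reproduced in a few lines. A sketch using a synthetic stand-in series (an assumed constant +0.13 C/decade trend plus noise, not the real UAH file):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a ~43-year monthly anomaly record.
n = 516
anoms = 0.013 / 12 * np.arange(n) + rng.normal(0, 0.15, n)

def trailing_decadal_trends(y, window=120):
    """OLS slope (deg C/decade) of every trailing `window`-month segment."""
    x = np.arange(window)
    return np.array([np.polyfit(x, y[i:i + window], 1)[0] * 120
                     for i in range(y.size - window + 1)])

trends = trailing_decadal_trends(anoms)
full = np.polyfit(np.arange(n), anoms, 1)[0] * 120
print(f"full-record trend: {full:+.2f} C/decade")
print(f"120-month trends range from {trends.min():+.2f} to {trends.max():+.2f} C/decade")
```

Even with a perfectly constant underlying trend, the trailing decadal trends wander well above and below it, illustrating how noisy any single 10-year slope is.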
Reply to  Mark BLR
July 22, 2022 5:19 am

OK.

Your own graph shows that the 10 year trend up to 2021, the time Monckton starts his new pause analysis, was the highest it’s ever been, over 0.5°C / decade. It’s now down to around 0.2°C / decade, faster than the underlying UAH trend, faster than most data sets over the last 40+ years.

None of this means the rate of warming has accelerated. All the graph really shows is that 10 year trends are very uncertain, and any conclusion drawn from any randomly chosen 10 year period is likely to be very wrong.

Reply to  Bellman
July 22, 2022 9:59 am

None of this means the rate [of] warming has accelerated.

If a trend [ = “a rate of warming” ] increases in magnitude then it is, by definition, “accelerating”.

If it reduces in magnitude then it is, by definition, “de-celerating”.

Going from a trend of 0.5°C/decade to 0.2°C/decade is a de-celeration of 60%.

10 year trends are very uncertain …

Yes they are, which is why the default integration time for something to be considered as “climate” is 30 years.

I still consider my graph to be “intriguing” though, given “monster” El Ninos (1982/3, 1997/8, 2015/6) happen every 15 to 18 years or so …

Reply to  Mark BLR
July 22, 2022 3:15 pm

According to your graph we are already below 1991/1992 temps.

That’s a 30 year spread, not a 10 year spread.

What it really shows is the cyclic factors in the temperatures. Right now it appears we are on the cooling side of a cycle. The question is where it will end.

Reply to  Tim Gorman
July 23, 2022 5:43 am

According to your graph we are already below 1991/1992 temps.

1) “My graph” is of (10-year / 120-month) trends, not “temps”.

2) UAH, like RSS and STAR, doesn’t provide (global mean) “temperatures”, but “temperature anomalies” at various altitude (/ pressure level) bands derived from satellite MSU data.

Hopefully the following graph will make the difference clearer (with trends in °C per decade so they fit on the same Y-axis number scale).

UAH_Anoms-trends_To-June-2022.png

Reply to  Bellman
July 22, 2022 7:18 am

Temperature anomalies are crap from the start to the finish. They are derived from bad data using bad statistical processes.

You can post anomalies all day long, such as the global average temperature, and I’ll show you how they are crap. I’ve done it multiple times with you and you just refuse to listen, instead just relying on religious dogma to carry you through!

Reply to  Tim Gorman
July 22, 2022 8:40 am

Temperature anomalies are crap from the start to the finish.

Yet you use them to claim there’s been a change in the trend with no uncertainty.

Seriously, show some consistency. If there is no record that shows what's actually happening to global temperatures, then all your claims that there is a pause, or that you can prove CO2 is not correlated with temperature, must be pure speculation.

Uncertainty is not going to help you. If we don't know for sure temperatures are rising, we also don't know that they are not rising at a much faster rate.

Reply to  Bellman
July 23, 2022 3:43 am

"Yet you use them to claim there's been a change in the trend with no uncertainty."

I am criticizing what is out there. I have stated several times just in this subthread that I have little confidence in *any* of the so-called temperature data, especially the derived anomalies.

“If there is no record that shows what’s actually happening to global temperatures, then all your claims that there is a pause, or that you can prove CO2 is not correlated with temperature, must be pure speculation.”

I have been especially consistent in criticizing *all* of the so-called temperature constructions. I guess you just don’t bother to read or you have a failing memory. That does *NOT* mean I can’t comment on what is out there. *I* am not the one that is saying the 40 year trend line is what we should depend on for creating expectations for the near-term future. That is *YOU*!

“Uncertainty is not going to help you. If we don;t know for sure temperatures are rising, we also don’t know that they are not rising at a much faster rate.”

Which is *EXACTLY* what I have been saying. We don’t know for sure if the trend slope is negative, positive, or zero. I’ve used that sentence multiple times with you!

Are you finally coming around to agreeing with me? Or are you going to continue assuming that measurement uncertainty is irrelevant and can be ignored?

Reply to  Tim Gorman
July 23, 2022 2:15 pm

"I have been especially consistent in criticizing *all* of the so-called temperature constructions. I guess you just don't bother to read or you have a failing memory. That does *NOT* mean I can't comment on what is out there."

Granted my memory isn’t that good and not likely to get better at my age. But all I’m trying to get you to acknowledge is the inconsistency in you claiming that all temperature constructions are useless, yet also claiming you know with certainty that the trend has changed over the last 8 years, and claiming this trend shows all models are wrong and CO2 cannot be causing warming.

*I* am not the one that is saying the 40 year trend line is what we should depend on for creating expectations for the near-term future. That is *YOU*!

Maybe you need to worry about your own memory. You keep remembering things I haven’t said. All I’ve said is that a 40 year trend is likely to be a better predictor than an 8 year trend. I certainly don’t think you should depend on it.

We don’t know for sure if the trend slope is negative, positive, or zero. I’ve used that sentence multiple times with you!

But you insist we shouldn't ignore it because short term trends often become long term ones. Do you see the problem? How can we concentrate on this short trend if we don't know whether it's warming or cooling much faster than before?

Are you finally coming around to agreeing with me? Or are you going to continue assuming that measurement uncertainty is irrelevant and can be ignored?

I keep telling you my thoughts on the subject, but you keep forgetting.

Reply to  Bellman
July 21, 2022 5:06 pm

"We are not trying to forecast anything here, we are trying to establish what the current trend is and if it has changed."

The current trend has *certainly* changed over the past eight years. You just don’t want to admit it.

Of what use is a trend line if you aren’t going to use it as a predictor of the future? What does knowing what happened in the past 40 or 50 years tell you? That it warmed over that period? So what? Does that mean it’s going to continue to warm? Or does it mean you simply don’t know what it’s going to do in the future?

If you don’t know what it’s going to do in the future then it’s all just mental masturbation with no purpose.

“No I don’t. On the other hand the weather of the past 30 years is likely to be a better indicator of the weather in a few years time than the weather yesterday.”

ROFL! So if it is sunny and hot today you think it is not likely to be sunny and hot tomorrow if 30 yrs ago it rained?

Why is the weather of 30yrs ago a better predictor of next year than this year? On what do you base that assumption? Tradition?

Bottom line? You are as bad at forecasting as you are at handling uncertainties.

"My claim is that looking at all the data over the last 40 years is a better indicator of what's currently happening than the last few years"

You would make a *terrible* farmer, where you must base your choice of crop and planting time on what happened over the last few years rather than what happened 40 years ago!

You are bad, bad, bad at forecasting.

The same thing would apply to almost anything one can think of – sizing growth investment in infrastructure, writing long term contracts for supplies, determining highway investments, etc.

“you should show a statistically significant indication there has been a change.”

Exactly what Monckton has done and which you can only denigrate because you are jealous of what he has done.

Reply to  Tim Gorman
July 21, 2022 6:09 pm

The current trend has *certainly* changed over the past eight years.

Only if you ignore all uncertainties. You only want to use the stated values of the trend. But in that case, the trend is always changing. If you look at all 8 year trends over the UAH data, there are some warming at over 0.5°C / decade, and some cooling by 0.3°C / decade.

Of what use is a trend line if you aren’t going to use it as a predictor of the future?

To tell you what has been happening.

What does knowing what happened in the past 40 or 50 years tell you?

It tells you it’s been warming. It allows you to investigate reasons for that warming.

Does that mean it’s going to continue to warm?

You’re the one who keeps insisting the current trend needs to be considered a possible indication of what’s going to happen.

If you have no other information it may be the best assumption that a trend will continue into the future, but you don’t want to project that too far.

But it’s far better to understand what’s happening and try to use that information for future predictions.

ROFL! So if it is sunny and hot today you think it is not likely to be sunny and hot tomorrow if 30 yrs ago it rained?

No. If it’s unusually sunny and hot today, I’m not going to assume that it will continue to be unusually sunny and hot for the next 5 years. It’s reasonable to assume that if it’s rained on occasions over the last 30 years, that it will rain at some point in the future.

It was 40°C here a couple of days ago. I know that’s far from the average, so I’m not going to assume that tomorrow will also be 40.

You would make a *terrible* farmer where you must base your choice of crop and planting time based on what happened over the last few years rather than what happened 40 years ago!

If I were a farmer I'd listen to what the weather forecasts were, rather than assume the weather this year was going to be the same as it was last year.

"You are bad, bad, bad at forecasting."

Give one example of a forecast I made that has been proven wrong.

“The same thing would apply to almost anything one can think of”

To be clear, when I say look at long term statistics, I’m also including trends etc. I’m not saying the weather next year is likely to be the same as it was 30 years ago, because we know there’s been 30 years of warming since then. I’m just saying don’t assume that just because this summer was unusually hot means next summer will also be unusually hot.

"Exactly what Monckton has done…"

I've never seen Monckton give any evidence of a statistically significant change in trend. If you know differently, show me where he does. I've looked at the data enough to satisfy myself that no such evidence exists, but I'm always willing to look at any new evidence.

Reply to  Bellman
July 22, 2022 1:53 pm

"Only if you ignore all uncertainties."

If you consider uncertainties then the 40 year trend is just as meaningless as the 8 year trend. So are you saying you can’t use the 40 year trend either?

“You only want to use the stated values of the trend.”

Malarky! *I* am the one that informed you that your “uncertainty of the trend” was not an uncertainty at all – it is only a best-fit metric between the trend line and the stated values with no regard to the uncertainty associated with the stated values!

If the uncertainties of the stated values are so bad over the past eight years that the trend is unusable then so is the trend line for the past 40 years!

"To tell you what has been happening."

What use is that? Are you going to use it to guess the future? It’s already the past. You lived through it. Apparently it didn’t cause the elimination of the human race.

“It tells you it’s been warming. It allows you to investigate reasons for that warming.”

Does that mean it is always going to warm? It’s pretty obvious that you think so.

It’s been 40 years with CAGW advocates creating and running climate models – and we are no closer to knowing the reasons for why the atmosphere acts as it does. The models all show the warming going up with the same slope forever – just like your 40 year trend. Apparently we are never going to have another ice age according to the models.

The 18 year pause and the current 8 year pause *should* be clues to honest researchers that the models aren’t working correctly since they don’t reproduce them at all. Calling the pauses “noise” is just the argumentative fallacy of Argument by Dismissal.

Reply to  Tim Gorman
July 22, 2022 5:04 pm

If you consider uncertainties then the 40 year trend is just as meaningless as the 8 year trend.

Only if you don’t understand how uncertainties in a trend work. Which of course, you don’t and I’m sure you are about to illustrate that again.

Malarky! *I* am the one that informed you that your “uncertainty of the trend” was not an uncertainty at all

You don’t inform someone by spouting nonsense. You insist that there is no uncertainty in a trend line, because “it is just a best fit metric”. Hence you want to rely only on the stated value of the trend line. That’s your only justification for saying the trend has certainly changed 8 years ago.

If the uncertainties of the stated values are so bad over the past eight years that the trend is unusable then so is the trend line for the past 40 years!

Pointless explaining this again. But here goes. The uncertainty in the trend line is not normally based on supposed uncertainty in the measurements, but on the variation in the data. This variation can come from different sources, including measurement error, but is mainly due to other factors not related to the independent variable (time in this case).

The less data you have, the more this variability contributes to the uncertainty of the trend. Hence the uncertainty over an 8 year period is much greater than over a 40 year period.

Gorman wants to ignore all this uncertainty and only base the uncertainty on the measurement uncertainty for each month. That's fine, and you can include it. But he also believes the uncertainty in the UAH data is huge, much larger than all the variation observed in the data over the last 40 years. And he thinks the best way to determine the actual uncertainty in the trend line is to assume it's possible for the trend to go from the coldest possible value for the first month to the warmest possible value for the last month, or vice versa.

Hence he thinks the uncertainty over 40 years could be 2.8°C divided by the time frame. So this would be around 0.13 ± 0.7°C / decade. Whilst over the last 8 years it would be 0 ± 3.5°C / decade.

For some reason he doesn't think this invalidates the claim that the pause proves CO2 does not affect temperature, or that there has certainly been a change in trend over the last 8 years.
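The "less data, more trend uncertainty" point made here follows from the textbook standard error of an OLS slope. A minimal sketch, assuming independent residuals with roughly 0.15 C of month-to-month scatter (an illustrative figure; real anomaly series are autocorrelated, which widens these intervals further):

```python
import numpy as np

def slope_stderr(n_months, resid_sd):
    """Standard error of an OLS slope, in deg C/decade, for a monthly
    series of length n_months with residual standard deviation resid_sd.
    Assumes independent residuals (a simplification)."""
    x = np.arange(n_months)
    sxx = np.sum((x - x.mean()) ** 2)
    return resid_sd / np.sqrt(sxx) * 120         # deg/month -> deg/decade

# Typical month-to-month scatter of ~0.15 C around the trend:
for years in (8, 40):
    se = slope_stderr(years * 12, 0.15)
    print(f"{years:2d}-year trend: +/- {2 * se:.3f} C/decade (approx. 95%)")
```

Under these assumptions an 8-year trend carries roughly a ±0.13 C/decade interval, about the size of the long-term trend itself, while the 40-year interval is an order of magnitude tighter.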

Reply to  Tim Gorman
July 22, 2022 5:18 pm

What use is that? Are you going to use it to guess the future? It’s already the past. You lived through it. Apparently it didn’t cause the elimination of the human race.

Does that mean it is always going to warm? It’s pretty obvious that you think so.

So many straw men.

Once again I am not trying to predict the future. I want to know if temperatures have been warming, past tense. I’d like to know if that warming stopped 8 years ago.

One reason for asking these questions might be to see what this would mean if the trend continued into the future, but it is not a good way of predicting the future.

I absolutely do not think it is always going to warm. If, as I think is likely, warming is being controlled by CO2, then warming will stop at some point after CO2 stops rising. It might also stop if other hypothetical cooling events occur, e.g. the earth is hit by a major asteroid, there's a super volcano, or the sun was to go cold.

Linear regression can only really tell you about the trend within the scope of the data. Extrapolating far beyond that is dangerous for many reasons.

I do not believe it’s likely that global warming will result in the extinction of the human race. But arguing that it hasn’t happened yet so cannot happen no matter how warm it gets, is another of your fallacious arguments.

Reply to  Bellman
July 22, 2022 2:15 pm

"You're the one who keeps insisting the current trend needs to be considered a possible indication of what's going to happen."

They *are* a possible indication of what could happen! Otherwise the pauses would not have happened at all! Even better they are indicators that the models are wrong since the models don’t reproduce the pauses!

If you have no other information it may be the best assumption that a trend will continue into the future, but you don’t want to project that too far.”

Ahhhh! So now you are changing your tune, eh? It’s always temporary with you. You will soon enough fall back into calling the pauses as discontinuities, statistically insignificant, and cherry-picked data.

“But it’s far better to understand what’s happening and try to use that information for future predictions.”

Of course! And Monckton is pointing out that the models don’t understand what is happening since they don’t reproduce the pauses in the face of ever increasing CO2!

“No. If it’s unusually sunny and hot today, I’m not going to assume that it will continue to be unusually sunny and hot for the next 5 years.”

How about for 18 years?

“It’s reasonable to assume that if it’s rained on occasions over the last 30 years, that it will rain at some point in the future.”

And if you can’t tell from your model when that rain is going to happen then what use is the model?

“If I were a farmer I’d listen to what the weather forecasts were rather than assume the weather this year was going to the same as it was last year.”

The Old Farmer’s Almanac has provided better long range forecasts for the next year than the climate models! The OFA claims an 80% average accuracy with its long range monthly forecasts. Since they publish a past-year review of their accuracy each year, if they were fudging the numbers they’d get caught out pretty quickly.

The OFA does *NOT* use a 40 year trend line to forecast the next year’s weather!

“Give one example of a forecast I made that has been proven wrong.”

You keep claiming that Monckton’s pause is unusable, statistically insignificant, causes discontinuities, etc. All to try and dissuade people from using the current pause to say that it might continue next year. You’ve said this for over a year. But we just keep seeing the pause continue every month regardless of your claim that it can’t because the 40 year trend line says it can’t.

“I’m not saying the weather next year is likely to be the same as it was 30 years ago, because we know there’s been 30 years of warming since then.”

Really? It sure sounds like that is what you’ve been saying! It sure sounds like you are now trying to change your tune! But then, just like with uncertainty, you revert right back to saying that next year will be just like the past 30 years! That’s based on your sentence “I’ve never seen Monckton give any evidence of a statistically significant change in trend.” In other words you *are* saying that the weather next year will be what the 40 year trend line says and not what the pause trend says.

You want your cake and to eat it too. It just doesn’t work that way!

Reply to  Tim Gorman
July 22, 2022 4:40 pm

They *are* a possible indication of what could happen!

It’s possible they could be the start of a new long term trend. And it’s possible the trend since the start of 2011, 0.33°C / decade, will continue into the distant future. Anything’s possible, but with statistics I’d prefer to be skeptical and wait for clear evidence before basing assumptions on slim possibilities.

Ahhhh! So now you are changing your tune, eh?

And I see we are in another long session of strawman arguments. I say that the best indicator of future trends, if you have no other information, might be the trend over the last 40 years. I’ve no idea why you think that disagrees with anything else I’ve said.

How about for 18 years?

Obviously if I think it’s unlikely the current weather conditions will last for the next 5 years, I doubt they will last for the next 18. What is your point?

And if you can’t tell from your model when that rain is going to happen then what use is the model?

By model, you mean looking at weather over the last 30 years as an indicator of the range of likely weather for the near future?

The point isn’t to predict when it will rain, the point is to know that rain is possible at times. That’s all I’m claiming, along with the idea that this is better than just using the weather from the last 8 years as a model.

The Old Farmers Almanac has provided better long range forecasts for the next year than the climate models!

I’m sure you’ve got lots of anecdotal evidence to back that up.

The OFA claims an 80% average accuracy with its long range monthly forecasts.

Piers Corbyn claims 85% accuracy. It doesn’t make it true.

You keep claiming that Monckton’s pause is unusable, statistically insignificant, causes discontinuities, etc

That is not a forecast.

I might have jokingly said in 2016 that knowing Monckton he would be claiming that’s the start of a new pause in a few years time. That would have been a forecast.

All to try and dissuade people from using the current pause to say that it might continue next year. You’ve said this for over a year.

An absolute lie. I’ve never said the new pause will end next year. I’m pretty sure it will continue for a number of years yet. It really just depends when the next big El Niño arrives.

Really? It sure sounds like that is what you’ve been saying!

It might sound like it to you, because you’ve got a weird habit of ignoring or misunderstanding everything I say.

But then, just like with uncertainty, you revert right back to saying that next year will be just like the past 30 years! That’s based on your sentence “I’ve never seen Monckton give any evidence of a statistically significant change in trend.” In other words you *are* saying that the weather next year will be what the 40 year trend line says and not what the pause trend says.

See what I mean?

How on earth does “I’ve never seen Monckton give any evidence of a statistically significant change in trend.”, translate in your mind to me saying “the weather next year will be what the 40 year trend line says”?

Reply to  Bellman
July 21, 2022 7:19 pm

When doing forecasts, however, the further back you go the less weight individual data values should have.

Do you have a particular weighting you want to use? I’ve tried various ones, and they generally increase the trend slightly. Here for example, I’ve effectively reduced the weight by 10% per year going back from present. The trend increases to 0.15°C / decade.

(Blue line is the unweighted trend, red the weighted one.)

20220721wuwt2.png
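
For anyone wanting to reproduce this kind of fit, a decaying-weights trend can be computed with weighted least squares, where each squared residual is scaled by a weight that shrinks going back in time. This is a hedged sketch on an invented series (the data and decay rate are illustrative only; as the comment above notes, on the real data the weighted trend came out higher, not lower — the direction of the effect depends entirely on the series):

```python
# Weighted least-squares slope: each point's squared residual is
# multiplied by a weight; older points get geometrically smaller weights.
def wls_slope(x, y, w):
    sw = sum(w)
    mx = sum(wi * a for wi, a in zip(w, x)) / sw
    my = sum(wi * b for wi, b in zip(w, y)) / sw
    return sum(wi * (a - mx) * (b - my) for wi, a, b in zip(w, x, y)) / \
           sum(wi * (a - mx) ** 2 for wi, a in zip(w, x))

# Invented series: 32 rising values then 8 flat ones (a "pause" at the end).
t = list(range(40))
y = [0.013 * i for i in range(32)] + [0.013 * 31] * 8

unweighted = wls_slope(t, y, [1.0] * 40)            # ordinary OLS
weighted = wls_slope(t, y, [0.8 ** (39 - i) for i in t])

# Here down-weighting old data pulls the slope toward the flat tail; on a
# series whose recent values sit above the long-term line, the weighted
# slope can just as easily come out larger instead.
```
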
Reply to  Bellman
July 22, 2022 2:52 pm

I don’t think you understand how weighting works in such a situation. Take the leftmost value and assign a weight of 1 to it, so that value only appears once. Take the second value and assign an incrementally increased weight to it, e.g. a 2, so it appears twice. Follow through with all the data values so that the rightmost value appears the same number of times as the number of data values in the set. If you have 40 years of monthly data then your last data entry will appear 480 times while the first data point will only appear once. The second to last data point will appear 479 times. Your trend line *should* bend to minimize the residuals between it and the 480 appearances of the last data point, the 479 appearances of the next to last data point, and so on. Your x-axis is no longer time but order of appearance.

Use whatever weighting algorithm you want but it *has* to give more weight to current data than past data. And that does *NOT* mean just multiplying the data by some value.
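
As an aside, the replication scheme described here can be checked in code: repeating the i-th point w_i times and fitting ordinary least squares gives exactly the same straight-line slope as weighted least squares with weights w_i — and, as the comment says, it is not the same as multiplying the data values. A quick sketch with invented numbers:

```python
# Replicating the i-th point w_i times, then fitting ordinary least
# squares, is algebraically identical to weighted least squares with
# weights w_i.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def wls_slope(x, y, w):
    sw = sum(w)
    mx = sum(wi * a for wi, a in zip(w, x)) / sw
    my = sum(wi * b for wi, b in zip(w, y)) / sw
    return sum(wi * (a - mx) * (b - my) for wi, a, b in zip(w, x, y)) / \
           sum(wi * (a - mx) ** 2 for wi, a in zip(w, x))

x = [1, 2, 3, 4, 5]               # invented data
y = [2.0, 2.9, 4.2, 4.8, 6.1]
w = [1, 2, 3, 4, 5]               # i-th point "appears" i times

xr = [a for a, wi in zip(x, w) for _ in range(wi)]   # replicated set
yr = [b for b, wi in zip(y, w) for _ in range(wi)]

assert abs(ols_slope(xr, yr) - wls_slope(x, y, w)) < 1e-12
```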

Reply to  Tim Gorman
July 22, 2022 4:00 pm

Could you provide a reference to this technique?

The usual way I’ve seen to add weighting is to provide a weight to the cost of each residual.

Reply to  Bellman
July 24, 2022 8:45 am

“Could you provide a reference to this technique?”

If I can find any old studies we did for sizing central office growth additions I’ll forward them to you.

you might look here: https://atlasprecon.com/weighted-average-forecasting/

It will give some hints on how to do this. I’m not your research assistant. If you want to know more on how to do it then do your own research.

Reply to  Tim Gorman
July 24, 2022 1:29 pm

“I’m not your research assistant.”

Yet you seem to think I’m yours. You keep making vague claims without ever doing the work to show they are true. You insist that a weighted trend will demonstrate the pause, but then when I do the work and show it doesn’t, you claim I’m not doing it correctly, but still won’t justify the way you want it done.

I don’t know if you’ve noticed, but I’ve now done it exactly as you described, and it still shows an accelerated warming trend, not a pause.

The link you provided is not about weighted regression, it’s about weighted averages.

Reply to  Bellman
July 24, 2022 4:13 pm

“Yet you seem to think I’m yours.”

Really? I am the *ONLY* one of us two that has worked out the examples throughout Taylor’s book on uncertainty. I’ve even obtained multiple statistics books and given you their reference and exact quotes. I have gone back through my forecasting notes and given you the proper method for weighting past data and given you a link to show how it is done!

And you just keep quoting religious dogma that the best-fit metric is a measure of uncertainty, that all uncertainty in all populations cancel and the accuracy of the mean is the standard deviation of the sample means.

You did *NOT* do the weighting correctly and you probably never will. It’s just too hard for you to understand even with my simple explanation of how to do it.

Nor can you get it into your head that you cannot just analyze physical phenomena by using linear regression based on the earliest start value and the latest end value. You HAVE to look at the pieces of the data along the way if you are trying to isolate physical phenomena. That’s why Monckton does what he does.

I’ve attached Abbott’s graph again. Maybe you’ll open your eyes and see it this time. It distinctly shows cyclical processes which your trend line does not and can not show. So you just remain stuck in your rut.

abbott_cyclical_temp.jpg
Reply to  Tim Gorman
July 24, 2022 4:59 pm

Really? I am the *ONLY* one of us two that has worked out the examples throughout Taylor’s book on uncertainty.

Congratulations. Have a gold star.

I’ve even obtained multiple statistics books and given you their reference and exact quotes.

I’m sure you believe that this has some bearing on anything.

I have gone back through my forecasting notes and given you the proper method for weighting past data and given you a link to show how it is done!”

Your memory is playing tricks again. Your link said nothing about a weighted regression, as I told you.

But again, you keep giving me these tasks. You don’t do the work yourself. You show me what your weighted regression looks like. Don’t expect me to do it for you then complain if you don’t like the result.

And you just keep quoting religious dogma that the best-fit metric is a measure of uncertainty, that all uncertainty in all populations cancel and the accuracy of the mean is the standard deviation of the sample means.

All things I’ve never said. But keep arguing with the voices in your head. It’s amusing in a grim sort of way.

You did *NOT* do the weighting correctly and you probably never will.

Then you do it.

It’s just too hard for you to understand even with my simple explanation of how to do it.

Really? What do you think I did wrong? As far as I can tell I followed your explanation to the letter. Maybe if you show me your workings we can see who’s done it more correctly.

I’ve attached Abbott’s graph again. Maybe you’ll open your eyes and see it this time.

Look at the comment I was responding to. There was no graph.

It distinctly shows cyclical processes which your trend line does not and can not show.

And has absolutely nothing to do with what we were talking about, or what you were claiming.

You said:

As you can see the trend you are looking at reversed around 2012. The end of this graph is around where the 2016 El Nino started so you have a spike at the end but the cooling trend has continued since then.

Your graph is showing a trend from 1975 – 2009. I was talking about the trend over the length of your two overlapping pauses – 1997 – 2022. I’m talking about UAH data, you are using some old HadCRUT data. You claim it stops in 2016, it stops around 2010.

Reply to  Bellman
July 25, 2022 6:13 am

“Your memory is playing tricks again. Your link said nothing about a weighted regression, as I told you.”

You are the only one with a failing memory. I laid out how you do it in excruciating detail. I even gave you a link: https://atlasprecon.com/weighted-average-forecasting/
but you apparently didn’t go look at it or try to figure it out.

It’s a total waste of time trying to lead you to water, you just refuse to drink – like an old, stubborn mule!

“Then you do it.”

I’ve shown you how to do it. I’ve given you a link on the process. *YOU* are the one that needs to learn how to do it, not me. YOU need to do it as a learning exercise. Me doing it won’t teach you anything! I’ve attached two very simple graphs showing how I’ve done a very simple exercise. You use a polynomial fit to see how the slope of the trend line changes as you weight the most recent data more heavily vs a straight linear regression line of the data.

“Really? What do you think I did wrong? As far as I can tell I followed your explanation to the letter. Maybe if you show me your workings we can see who’s done it more correctly.”

You didn’t follow it at ALL! You don’t just multiply the data values, that changes the actual data without weighting it at all. As usual, you just don’t get it!

“And has absolutely nothing to do with what we were talking about, or what you were claiming.”

Of course it does! Stop whining. It shows you simply cannot just depend on a long term linear regression to determine what is actually happening! Something you just refuse to understand. There are none so blind as those who will not see!

weighted_vs_linear.jpg
Reply to  Tim Gorman
July 25, 2022 9:43 am

You are the only one with a failing memory. I laid out how you do it in excruciating detail.

I was specifically talking about the link. The one you repeat, claiming it explains how to do the weighted regression. It does not. Maybe you posted the wrong link twice, but the one you posted is about “weighted average forecasting”. It says nothing about regression. All it’s doing is calculating a weighted average by month and using that as a forecast.

As to your excruciating description, I followed it and gave you two graphs. They didn’t give you the result you wanted, so you insist I’d done them wrong, but still refuse to show me what they should look like.

Me doing it won’t teach you anything!

I’m not asking you to teach me anything, and if I was I’d be asking for my money back. What I’m asking is for you to justify your claims.

You use a polynomial fit to see how the slope of the trend line changes as you weight the most recent data more heavily vs a straight linear regression line of the data.

Pity your “excruciating detail” failed to mention you want a polynomial. So what order of polynomial does your method require?

As usual, you just don’t get it!

Then show me how you did it. Show me your result.

There are none so blind as those who will not see!

There are none so empty as those who are full of themselves.

Reply to  Bellman
July 25, 2022 12:11 pm

“I was specifically talking about the link. The one you repeat, claiming it explains how to do the weighted regression”

Wow! Go take your medicine! The entire issue was how you extend past data into the future – FORECASTING. *YOU* want to extend the linear regression line by saying that anything other than the linear regression line is somehow wrong!

I’ve told you MULTIPLE TIMES that such a view is nothing more than the old argumentative fallacy of Appeal to Tradition. When forecasting the *worst* thing you can do is extend a long term linear regression line because that gives equal weight to data in the far past compared to recent data. The link I gave you gives a base for weighting data to make it more applicable to a forecast!

You *REALLY* don’t remember me telling you this? You are going to make me go back through the thread to find it?

“Pity your “excruciating detail” failed to mention you want a polynomial. So what order of polynomial does your method require?”

What did you expect a weighted data graph to give you? Another linear regression line with a different slope? That’s what you get when you generate a new set of data by just multiplying existing data by some factor! If you multiply the new data by a larger number or the old data by a smaller number exactly what did you expect to get besides another linear regression line with a larger slope?

That simply isn’t how you forecast from past data!

What difference does it make as to what order of polynomial it is? Create the data yourself and do a fit. It’s the only way you are going to learn! It’s pretty obvious what data values I used just from the graph – 1,2,3,4,5,6,7,8,9,10. And how to do it is obvious in the link I gave you.

It is *really* frustrating discussing anything with you because you know so little about the subject but are so willing to dismiss everything out of hand if it doesn’t fit your narrow point of view of how things ought to be!

“Then show me how you did it. Show me your result.”

I DID SHOW YOU! The graph is so simple a 6th grader could figure it out!

“There are none so empty as those who are full of themselves.”

You have so blinded yourself you can’t even figure out a simple graph based on 10 numbers. And you are criticizing others?

Reply to  Tim Gorman
July 25, 2022 2:21 pm

Wow! Go take your medicine! The entire issue was how you extend past data into the future…

These attempts at distraction are getting feeble. You know full well, and if you don’t you can reread the comments, what I was talking about. You gave me a link you keep reposting, claiming it answered the question of how you wanted me to do a weighted linear regression, and when I point out it’s about a weighted average forecast you claim that’s what you meant all along.

*YOU* want to extend the linear regression line by saying that anything other than the linear regression line is somehow wrong!

Stop lying. I keep telling you I don’t want to do that.

I’ve told you MULTIPLE TIMES that such a view is nothing more than the old argumentative fallacy of Appeal to Tradition.

Yet you have no problem with using the argumentative fallacy of a strawman.

What did you expect a weighted data graph to give you?

Stop playing games, and say exactly what you want me to do, or better yet do it yourself. I’m sure if you fiddle about with the data enough you can get something that looks like a pause.

What difference does it make as to what order of polynomial it is?

Seriously?

Create the data yourself and do a fit.

I did and posted the result.

It is *really* frustrating discussing anything with you because you know so little about the subject but are so willing to dismiss everything out of hand if it doesn’t fit your narrow point of view of how things ought to be!

Ditto.

I DID SHOW YOU! The graph is so simple a 6th grader could figure it out!

Show me what it looks like when you use real data.

You have so blinded yourself you can’t even figure out a simple graph based on 10 numbers.

Explain to me how you would use your graph to predict what the next value would be.

Your data is growing linearly, but you want to weight the newer values. So you fit this curve which gives the impression the slope is declining, but the x axis isn’t useful, so presumably you need to re-scale it, as I did with the UAH data. At that point do you just get back to a linear fit and predict the next value will be 11?

Reply to  Bellman
July 25, 2022 2:42 pm

So here is the same weighted technique, using a polynomial. I tried a quadratic but that didn’t look much different to the linear, so here I’m using a cubic.

20220725wuwt1.png
Reply to  Bellman
July 25, 2022 2:44 pm

And here’s the same using the proper x scale, with the normal linear regression in blue.

20220725wuwt2.png
Reply to  Bellman
July 25, 2022 2:50 pm

“I’ve told you MULTIPLE TIMES that such a view is nothing more than the old argumentative fallacy of Appeal to Tradition.”

Do you actually know what appeal to tradition means?

Reply to  Tim Gorman
July 22, 2022 6:07 pm

I think this is what you are getting at. Red line shows the linear trend for your increasing repetition weighting scheme, the blue line is the non-weighted linear trend. As before the weighted trend predicts warmer temperatures at this time than the unweighted trend.

20220722wuwt1.png
Reply to  Bellman
July 22, 2022 6:19 pm

Here’s the same, but with the proper x-axis scale.

20220722wuwt2.png
Reply to  Tim Gorman
July 21, 2022 5:03 pm

Not true
Weather predictions work well for a few days
Climate predictions are notoriously inaccurate.
Extrapolating short term trends does not create a good long term climate forecast. Even extrapolating 30 to 50 year trends does not create a good climate prediction for the next 30 to 50 years.

The climate will get warmer,
unless it gets colder.

Reply to  Richard Greene
July 21, 2022 5:14 pm

Monckton is not extrapolating anything. He is finding where a break point in the slope of the trend has happened. It’s the point where the residuals between the data points and the trend line begin to grow with no end in sight up through the present date.

It identifies a point which needs to be investigated to find out why the break point happened.

All the climate alarmists can do is use the argumentative fallacy of Argument by Dismissal. They just say “It’s meaningless” without ever explaining why.

I agree with you about the climate predictions. Their uncertainty as time moves forward just grows and grows until you can’t even identify what the actual trend *is*.

Reply to  Tim Gorman
July 21, 2022 6:37 pm

He is finding where a break point in the slope of the trend has happened.

I keep asking you exactly what algorithm you think is being used here and why Monckton never claims that is what he’s doing.

It identifies a point which needs to be investigated to find out why the break point happened.

And as I keep saying, if you really believed it needed investigating, you should be investigating why there was a spontaneous increase of around 0.25°C at that point. That’s what break point analysis is meant to be alerting you to, not some insignificant change in the rate of warming.

All the climate alarmists can do is use the argumentative fallacy of Argument by Dismissal. They just say “It’s meaningless” without ever explaining why.

Stop trying to reduce this to fallacious arguments. Many people have explained why Monckton’s pause is meaningless, you just dismiss their arguments.
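
For reference, the kind of break-point analysis meant here can be sketched with a minimal step-change detector (invented data; an illustration of the idea, not anyone’s actual procedure): model the series as one mean before a candidate break and another after, and pick the break that minimizes the total squared error. On a series with a jump, it flags the jump itself rather than a change in slope.

```python
# Step-change detector: choose the break index that minimizes the summed
# squared deviation of each segment from its own mean.
def sse(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def best_break(series):
    n = len(series)
    return min(range(2, n - 1),
               key=lambda k: sse(series[:k]) + sse(series[k:]))

# Invented series: 50 flat values, then a 0.25 step up for 50 more.
series = [0.0] * 50 + [0.25] * 50
assert best_break(series) == 50   # the detected break is at the jump
```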

Reply to  Bellman
July 22, 2022 2:28 pm

“I keep asking you exactly what algorithm you think is being used here and why Monckton never claims that is what he’s doing.”

You’ve been given this multiple times. Monckton explains it each and every post on the pause. And, as usual, you just totally ignore it and pretend it’s not there – just like you do with measurements consisting of a stated value +/- uncertainty.

“And as I keep saying, if you really believed it needed investigating, you should be investigating why there was a spontaneous increase of around 0.25°C at that point.”

El Nino. You’ve been given this multiple times as well. The global average has been *cooling* since the last El Nino. So the step is gradually disappearing!

See the attached chart (hat tip to Tom Abbott). It shows that after each step up you get cooling. And the cooling after the last step up has already started.

“Stop trying to reduce this to fallacious arguments. Many people have explained why Monckton’s pause is meaningless, you just dismiss their arguments.”

Sorry, NO ONE, including *YOU* has ever shown how the pause is meaningless. All they’ve done, including *YOU*, is just to make the assertion that it is. You’ve just resorted to the Appeal to Tradition fallacy – “the past 40 year trend is what we should be looking at, not the most recent 8 year data”.

When the residuals start growing that *is* significant. Anyone with a lick of common sense can figure that one out – which lets you out apparently. It’s like a car that starts whipping around on an icy bridge after covering miles before just fine. According to you those past miles tell you that the skidding is meaningless, just keep your foot on the gas!

Reply to  Tim Gorman
July 22, 2022 3:57 pm

I know what Monckton does. I’m asking you how your method works. Monckton looks at every start point and chooses the one that gives the longest zero trend. Your method is something about finding the point where residuals start to deviate from the trend line.
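
That description of the procedure is simple enough to sketch in code (toy numbers standing in for monthly anomalies): scan every candidate start point, fit a trend from there to the present, and keep the earliest start whose trend to the present is not positive.

```python
# For each candidate start index, fit an OLS trend from there to the
# end; the earliest start with a non-positive slope marks the longest
# "pause" under this description of the method.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def longest_pause_start(series):
    x = list(range(len(series)))
    for start in range(len(series) - 2):   # earliest qualifying start wins
        if ols_slope(x[start:], series[start:]) <= 0:
            return start
    return None                            # no pause found

# Toy series: 20 rising steps, then 10 flat ones.
series = [0.1 * i for i in range(20)] + [1.9] * 10
assert longest_pause_start(series) == 19   # the pause begins at the plateau
```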

“El Nino. You’ve been given this multiple times as well.”

I’ve been pointing out the El Niño since the beginning as well. So to get this straight you think the El Niño of 2016 explains the increase in temperature at the start of the pause, but you don’t think it explains why the last 8 years have had a flat trend. Is that your view?

Sorry, NO ONE, including *YOU* has ever shown how the pause is meaningless.

A few reasons:

  • The length of the pause is too short to be meaningful.
  • Its confidence interval is such that the true trend could be anywhere within ±0.5°C / decade.
  • The effect of the pause on the overall trend since 1978 has been to increase it from 0.11 to 0.13°C / decade.
  • A continuation of the previous trend could go straight through the confidence interval of the pause.
  • Moving the start date back just a couple of years results in a trend to the present of over 0.2°C / decade. That’s a trend that is faster than the long term trend, despite the fact that 80% of it consists of a pause.
  • If the trend over 8 years is considered meaningful, why do you not also consider all 8 year trends meaningful? This, for instance, would include the trend from September 2010 – September 2018, with a rate of warming of 0.55°C / decade.
  • The pause can entirely be explained by considering ENSO conditions along with CO2.
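
The sixth point is easy to illustrate with synthetic data (an invented series, not UAH): give a series a constant underlying trend plus a slow ENSO-like oscillation, and 8-year windows will show both flat spells and rapid-warming spells that are artefacts of the window length.

```python
import math

# Ordinary least-squares slope helper.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# 40 "years" of monthly values: a 0.013/yr underlying trend plus a slow
# oscillation large enough to mask it over short windows.
t = [i / 12 for i in range(480)]
y = [0.013 * ti + 0.15 * math.sin(0.2 * ti) for ti in t]

window = 96   # 8 years of monthly values
slopes = [ols_slope(t[i:i + window], y[i:i + window])
          for i in range(0, len(t) - window, 12)]

# Despite the constant underlying trend, some 8-year windows "pause"
# (slope <= 0) while others warm several times faster than 0.013/yr.
assert min(slopes) < 0 < max(slopes)
```
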
Reply to  Bellman
July 23, 2022 6:31 am

“I’m asking you how your method works. Monckton looks at every start point and chooses the one that gives the longest zero trend. Your method is something about finding the point where residuals start to deviate from the trend line.”

Unfreakingbelievable! And you think the point at which the zero trend starts is *NOT* the same point where the residuals start to grow!

You’ve lost it man!

“I’ve been pointing out the El Niño since the beginning as well. So to get this straight you think the El Niño of 2016 explains the increase in temperature at the start of the pause, but you don’t think it explains why the last 8 years have had a flat trend. Is that your view?”

Stop putting words in my mouth. Before the 8 yr pause we had an 18 year pause – they are separated by the El Nino. Since the El Nino the temps have been going DOWN, not up.

The trend has been zero for 8 yrs. Why do you keep trying to come up with idiotic reasons to deny that?

“The length of the pause is too short to be meaningful.”

When you include the 18 year pause with the 8 year pause you get a time span of almost twenty years. That isn’t significant?

Only to you, only to you!

“Its confidence interval is such that the true trend could be anywhere within ±0.5°C / decade.”

And you are *still* confused by confidence interval and uncertainty.

“The effect of the pause on the overall trend since 1978 has been to increase it from 0.11 to 0.13°C / decade.”

Because of the El Nino, not the pause.

“Moving the start date back just a couple of years”

So what? This is no refutation of the 8 year pause being significant.

“If the trend over 8 years is considered meaningful, why do you not also consider all 8 year trends as meaningful. This for instance, would include the trend from September 2010 – September 2018, with a rate of warming of 0.55°C / decade.”

Who says it isn’t meaningful? Even that trend doesn’t comport with what the climate models produce! Why did the slope go UP? The climate models don’t show it! Why?

You are lost in a religious fervor over the climate alarmists dogma. Nothing that calls that dogma into question can shake your faith, you just ignore it.

Reply to  Tim Gorman
July 23, 2022 2:30 pm

Unfreakingbelievable! And you think the point at which the zero trend starts is *NOT* the same point where the residuals start to grow!

You’ve lost it man!

Calm down. I’ve no way of knowing if the point at which the zero trend starts is the same as the point where the residuals start to grow, because you refuse to explain what you mean by that. It seems unlikely to me, and if they are the same it’s more likely to be a coincidence. If Monckton were doing what you claimed, he would say so himself rather than explicitly telling us what his actual method is. It would sound much more impressive.

The trend has been zero for 8 yrs. Why do you keep trying to come up with idiotic reasons to deny that?

I’m not denying it, I’m saying it’s meaningless. The trend over the last 15 years has been around 0.3°C / decade. I can’t deny that, but that doesn’t mean I have to give it any specific meaning. It’s just natural fluctuations.

When you include the 18 year pause with the 8 year pause you get a time span of almost twenty years. That isn’t significant?

When you add the two pauses together the trend is 0.12°C / decade, and just about statistically significant.

So what? This is no refutation of the 8 year pause being significant.

It shows how arbitrary it is, and how it only works by cherry-picking the start date. Suppose someone found a warming trend since 1975, but someone else pointed out the trend was cooling if you start in 1973. How much credibility would you then attach to the warming trend?

You are lost in a religious fervor over the climate alarmists dogma. Nothing that calls that dogma into question can shake your faith, you just ignore it.

These snide remarks are getting tiresome, and just make it look like you are desperate to justify your own dogmatic beliefs.

Reply to  Bellman
July 24, 2022 8:37 am

“Calm down. I’ve no way of knowing if the point at which the zero trend starts is the same as the point where the residuals start to grow, because you refuse to explain what you mean by that.”

Playing ignorant isn’t a refutation of anything. See the attached graph. The start point is somewhat indeterminate. Pick one. But the fact that the residuals start to grow is irrefutable. And that is, in essence, what Monckton is finding with his process. Does he need to go back one month or forward one month as the start point – who cares? You keep using nit-picking as some kind of refutation.



residuals.jpg
Reply to  Tim Gorman
July 24, 2022 1:10 pm

The trouble is that graph looks nothing like the actual data. You still don’t explain what trend line you are using. The one up to the start date of the pause, or the one covering all the data, or what?

Here’s the graph showing the trend line based on data up to the start of the pause, projected up to the current date. I’ve marked the pause anomalies in red. As I’ve said before, the pause residuals grow, but mainly because they are warmer than the projected trend. But I’ve still no idea how you calculate September 2014 as being the point where they start to grow.

20220724wuwt2.png
Reply to  Bellman
July 24, 2022 3:59 pm

I am using *YOUR* trend line that you are so adamant can be the ONLY TRUE TREND LINE.

It has a slope going up to the right, just like the one I showed in my picture.

And when the slope changes, e.g you have a pause, the residuals begin to grow.

I didn’t really expect you to understand and you have proved me right.

Reply to  Tim Gorman
July 24, 2022 4:38 pm

I am using *YOUR* trend line that you are so adamant can be the ONLY TRUE TREND LINE

This is insane. I’ve spent the last few weeks trying to tell you there is uncertainty in a trend line, when you insisted there is none. Now you claim I’m saying there can only be one true trend line.

It has a slope going up to the right, just like the one I showed in my picture.

Again, I’m asking you which particular trend line you want to use to determine when residuals start increasing. I even gave you two possibilities to choose from. But now I’m supposed to guess on the grounds that it’s apparently “my” trend line.

And when the slope changes, e.g you have a pause, the residuals begin to grow.

How does the slope change? It’s a straight line.

You’ve been banging on about this method of detecting pauses by finding out when the residuals grow. You claim it gives the same results as Monckton’s method. Yet any attempt to get you to define your method results in you twisting and turning. Throwing up any distraction to evade actually providing an answer. I think it should be obvious why this is – you don’t actually know what you are talking about.

But I’ll give you one more chance – just explain what you think your method is. No hand-waving, or toy diagrams. Just tell me how you determine the month when residuals start to grow.

Reply to  Bellman
July 25, 2022 5:40 am

“This is insane. I’ve spent the last few weeks trying to tell you there is uncertainty in a trend line, when you insisted there is none. Now you claim I’m saying there can only be one true trend line.”

Your “uncertainty” of a trend line is the best-fit metric. I.e. it is based on the differences between the stated values and the trend line – NO CONSIDERATION OF THE UNCERTAINTY ASSOCIATED WITH THE STATED VALUES!

Until you learn that you can’t simply ignore and dismiss the uncertainty in the measurements you’ll never get this right!

If the stated value is 12 +/-1 and the trend line point is 10 then the residual can actually range from 1 to 3 if you consider the measurement uncertainty. That is sufficient to completely change the track of the trend line when considering the uncertainty of surrounding data points.
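A few lines of Python make that arithmetic concrete (using only the hypothetical numbers from the paragraph above: stated value 12, uncertainty ±1, trend-line value 10):

```python
# Hypothetical numbers from the comment: stated value 12 with a
# measurement uncertainty of +/-1, and a trend-line prediction of 10.
stated, u, predicted = 12.0, 1.0, 10.0

residual = stated - predicted       # residual from the stated value alone
lo = (stated - u) - predicted       # residual if the true value were 11
hi = (stated + u) - predicted       # residual if the true value were 13

print(residual, lo, hi)  # 2.0 1.0 3.0
```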

“Again, I’m asking you which particular trend line you want to use to determine when residuals start increasing.”

For at least the fourth time, IT DOESN’T MATTER unless the prior trend is the same as the current trend. The residuals will continue to grow if the slope of the current data changes!

“You’ve been banging on about this method of detecting pauses by finding out when the residuals grow. You claim it gives the same results as Monckton’s method. Yet any attempt to get you to define your method results in you twisting and turning.”

No twisting and turning here, just an inability on your part to read a simple graph. I’ve attached it again but I don’t expect you actually understand it. You’ll just keep on claiming I haven’t explained my method.



residuals.jpg
Reply to  Tim Gorman
July 25, 2022 9:10 am

Your “uncertainty” of a trend line is the best-fit metric.

Still no idea what you think this means. It just seems to be a phrase you’ve got stuck in your head.

A trend line is a best fit, for however you are defining best fit. The uncertainty comes from the variation in the data, which can come from natural variability in the sample, or measurement error.

I.e. it is based on the differences between the stated values and the trend line – NO CONSIDERATION OF THE UNCERTAINTY ASSOCIATED WITH THE STATED VALUES!

And as I and others keep pointing out, the uncertainty associated with the stated values is present in the variation. That’s how the standard error of the regression is calculated, it’s how Taylor explains how to calculate it. It’s true whether the variation is caused entirely by measurement error, as in Taylor’s example, or if the values are all exact but there is variation in the sample, or for any combination of factors.

Does this mean the standard formula for the standard error is correct? No. There are a lot of assumptions and they can affect the uncertainty.

If the stated value is 12 +/-1 and the trend line point is 10 then the residual can actually range from 1 to 3 if you consider the measurement uncertainty.

Broadly correct, but not really true of the residual. That is an exact value. What you mean is the true value could differ by ±1. But another way of looking at it is, if the true value is 12, then the residual could have been between 1 and 3.

That is sufficient to completely change the track of the trend line when considering the uncertainty of surrounding data points.

It isn’t really. You are not dealing with just one data point. The more you have the more the discrepancies will tend to cancel.
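That cancellation is easy to check numerically. A minimal sketch, assuming a made-up trend of 0.0015 per month and uniform ±1 measurement errors on every point (none of these numbers come from the thread):

```python
import random

random.seed(42)

def fitted_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

true_slope = 0.0015        # assumed underlying trend per month
xs = list(range(480))      # 40 years of monthly data
slopes = []
for _ in range(500):
    # each stated value = true value + uniform measurement error in [-1, 1]
    ys = [true_slope * x + random.uniform(-1.0, 1.0) for x in xs]
    slopes.append(fitted_slope(xs, ys))

mean = sum(slopes) / len(slopes)
spread = (sum((s - mean) ** 2 for s in slopes) / len(slopes)) ** 0.5
# the fitted slopes cluster tightly around the true slope even though
# every individual point carries a +/-1 uncertainty
print(mean, spread)
```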

Reply to  Bellman
July 25, 2022 11:13 am

“Still no idea what you think this means. It just seems to be a phrase you’ve got stuck in your head.”

IT MEANS RESIDUALS ARE NOT UNCERTAINTY! They are a best-fit metric! They are based only on the stated value and do not consider the actual uncertainties!

“A trend line is a best fit, for however you are defining best fit.”

It is a best-fit TO THE STATED VALUES. Why is that so hard to get into your head?

“The uncertainty comes from the variation in the data, which can come from natural variability in the sample, or measurement error.”

It is the variation in THE STATED VALUES. As usual you just simply ignore the uncertainties associated with the stated values! It is *not* uncertainty.

“That’s how the standard error of the regression is calculated, it’s how Taylor explains how to calculate it”

We’ve been around on this already. As usual I gave you the exact quotes from Taylor where he states this is calculating the best fit. THE BEST FIT.

You still have never internalized the fact that ERROR IS NOT UNCERTAINTY!

“It’s true whether the variation is caused entirely by measurement error, as in Taylor’s example, or if the values are all exact but there is variation in the sample, or for any combination of factors.”

As usual you are confused by cherry picking from Taylor. In Chapter 8 he uses only stated values, y1, …, yn. NO MEASUREMENT UNCERTAINTIES. He then uses those values to define the residuals as y_i − A − Bx_i. Then he calculates the standard deviation of those residuals which he shows as σ_y. None of this has anything to do with the measurement uncertainty!

STOP CHERRY PICKING. DO THE EXAMPLES!

I’ve attached a graph from Taylor explaining what he is doing. Please note carefully that the point (x,y) HAS NO MEASUREMENT UNCERTAINTY associated with it. Taylor is calculating the residual from the point (x,y) to the trend line. A BEST-FIT metric, not measurement uncertainty. Δy = dy/dx * Δx.

Again, STOP CHERRY PICKING.

“Broadly correct, but not really true of the residual.”

Of course it is the residual. The residual is the difference between the (x,y) point and the trend line! If your point is actually (x,y+u) then the residual changes!

tg: “ the residual can actually range from 1 to 3″
bellman: “ then the residual could have been between 1 and 3.”

Judas H. Priest! Do you see any difference between what I said and what you said? Do you *really* think you are stating something I didn’t already understand?

“It isn’t really. You are not dealing with just one data point. The more you have the more the discrepancies will tend to cancel.”

Not when you consider the uncertainty. The trend line can go through any point in the uncertainty intervals of all the data points! That’s why the trend line can range from positive, to negative, to zero when considering uncertainties larger than the differences trying to be identified!

residual_graph.jpg
Reply to  Tim Gorman
July 25, 2022 1:41 pm

Calm down. It’s really tiring to have a discussion where half of it is written in bold block capitals. It just makes you look like a toddler having a tantrum.

IT MEANS RESIDUALS ARE NOT UNCERTAINTY!

They are not the uncertainty of the trend line. I’ve never suggested they are.

It is a best-fit TO THE STATED VALUES

Yes. What else do you want to get a best fit for?

It is the variation in THE STATED VALUES.

Yes. That’s the idea. You are working with the data, not to any other values.

As usual I gave you the exact quotes from Taylor where he states this is calculating the best fit. THE BEST FIT.

And then he shows how to calculate the uncertainty in that best fit.

“In Chapter 8 he uses only stated values, y1, …, yn. NO MEASUREMENT UNCERTAINTIES.”

Yes. That’s what I keep telling you. It’s possible to calculate the uncertainty of a slope using just the stated values. You don’t need to use supposed measurement uncertainties, just the variation in the residuals.

Then he calculates the standard deviation of those residuals which he shows as σ_y. None of this has anything to do with the measurement uncertainty!

That’s taken you up to the end of section 8.3. Now read section 8.4. It’s called “Uncertainty in the Constants A and B”. Note, that A and B, are the constants from the linear equation y = A + Bx.

I’ve attached a graph from Taylor explaining what he is doing.

As so often you’ve misunderstood that diagram. That’s describing what happens when there is uncertainty in x as well as y. That isn’t an issue here as there is no uncertainty in the time.

The residual is the difference between the (x,y) point and the trend line! If your point is actually (x,y+u) then the residual changes!

A residual is the difference between an observed value and the predicted value.

“tg: “ the residual can actually range from 1 to 3″
bellman: “ then the residual could have been between 1 and 3.”

I didn’t explain it well, and it’s not an important point. But what I meant is that if the true value is y, and has a measurement uncertainty of ±1, and if that y is 2 more than the predicted value, then the observed residual could be between 1 and 3.

“Not when you consider the uncertainty.”

Yes when you consider uncertainty. Read Taylor 8.4. Look at the formula for σ_B (Eq 8.17). You’ll need to substitute his formula for Delta, but it should come to the same equation every other source gives.
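Taylor’s recipe in sections 8.3–8.4 is short enough to write out directly: σ_y from the residuals, then σ_B = σ_y·√(N/Δ) with Δ = N·Σx² − (Σx)². A sketch only, with invented toy data points rather than anyone’s temperature series:

```python
from math import sqrt

def taylor_fit(xs, ys):
    """Least-squares fit y = A + B*x with Taylor-style uncertainties:
    sigma_y from the residuals and sigma_B = sigma_y * sqrt(N / Delta)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    delta = n * sxx - sx * sx                 # Taylor's Delta
    b = (n * sxy - sx * sy) / delta           # slope B
    a = (sy - b * sx) / n                     # intercept A
    resid = [y - a - b * x for x, y in zip(xs, ys)]
    sigma_y = sqrt(sum(r * r for r in resid) / (n - 2))
    sigma_b = sigma_y * sqrt(n / delta)
    return a, b, sigma_y, sigma_b

# toy data: roughly y = 1 + 2x with small deviations
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b, sigma_y, sigma_b = taylor_fit(xs, ys)
print(b, sigma_b)   # slope near 2, with its standard error
```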

Reply to  Bellman
July 25, 2022 3:23 pm

“And then he shows how to calculate the uncertainty in that best fit.”

It’s not uncertainty. It’s a best-fit metric. Uncertainty is associated with an unknown true value. There is no unknown true value for the residual between a stated value and the trend value!

You didn’t even bother to look at the graph I posted out of Taylor. The point you use to determine the residual is (x,y). It is *not* (x±u_x, y±u_y). There is no uncertainty associated with x or with y. They are stated values!

“You don’t need to use supposed measurement uncertainties, just the variation in the residuals.”

If you don’t take the uncertainty of the measured values into consideration then how do you know your slope is the correct one? Once again, you fall back into the same old box, uncertainty can be ignored!

“That’s taken you up to the end of section 8.3. Now read section 8.4. It’s called ‘Uncertainty in the Constants A and B’.”

OMG! You are *STILL* cherry picking!

“Having found the uncertainty σ_y in the measured numbers y1, …, yn, we can easily return to our estimates for the constants A and B and calculate their uncertainties. The point is that the estimates (8.10) and (8.11) for A and B are well-defined functions of the measured numbers y1, …, yn. Therefore the uncertainties in A and B are given by simple error propagation in terms of those in y1, …, yn.”

“The results of this and the previous two sections were based on the assumption that the measurements of y were all equally uncertain and that any uncertainties in x were negligible.” (bolding mine, tg)

In other words, all uncertainty in y CANCELS and you are left with the stated values to calculate the best-fit residuals.

STOP CHERRY PICKING. You do *NOT* understand any of this. You are trying to cherry pick crap to throw against the wall. You see the word “uncertainty” and assume the context is the same throughout the book!

“As so often you’ve misunderstood that diagram. That’s describing what happens when there is uncertainty in x as well as y. That isn’t an issue here as there is no uncertainty in the time.”

Please! You haven’t done any of the problems or actually studied the text. You’ve searched for the word “uncertainty”.

And the “uncertainty” in y is the distance between y and the equivalent point on the trend line. It is the RESIDUAL.

You can tell this from Eq. (8.20)!

σ_y(equiv) = Bσ_x. where B is the slope of the line, i.e. dy/dx.

You just keep on trying to rationalize to yourself that you can determine measurement uncertainty from only the stated values and the best-fit linear regression. You can’t. Taylor doesn’t do it and neither can you. Nor can anyone else!

It is truly that simple. Measurement uncertainty means there is an interval within which *any* trend line is just as correct as any other trend line within that interval. Taking just the stated values and ignoring the measurement uncertainty is just one more way of trying to justify ignoring the measurement uncertainty so you can ignore it.

It’s *exactly* like the climate scientists saying the uncertainty of the temperature trend is the residual fit between the stated values and the trend line. That allows them to ignore the measurement uncertainties just like you always want to do!

Reply to  Tim Gorman
July 25, 2022 4:32 pm

It’s not uncertainty. It’s a best-fit metric.

Take it up with Taylor. That’s his word.

There is no uncertainty associated with x or with y. They are stated values!

Take it up with Taylor. He’s using stated values to calculate the uncertainty.

If you don’t take the uncertainty of the measured values into consideration then how do you know your slope is the correct one?

You don’t. That’s why there’s uncertainty.

Once again, you fall back into the same old box, uncertainty can be ignored!

I’m literally telling you how to calculate the uncertainty in the trend line, which you insist doesn’t exist.

OMG! You are *STILL* cherry picking!

So now it’s cherry-picking to point you to the section of your own preferred book that explains how to do something you insist is impossible.

In other words, all uncertainty is y CANCELS and you are left with the stated values to calculate the best-fit residuals.

You cut and paste the words, but still don’t understand them.

All you have here is the stated values. That’s how you calculate the best fit, and it’s how you calculate the uncertainty of that fit. And yes, there are assumptions in that calculation, such as the variation being iid, e.g. that the variation doesn’t change with x.

You are trying to cherry pick crap to throw against the wall.

So you are claiming Taylor is crap now? If you don’t like that, there are lots of places elsewhere you could read, including Bevington I think. There is nothing wrong with pointing to the parts of a book that explain what you need to know. There is nothing weird or unusual in calculating the standard error of the regression slope. It’s basic statistics. Only you would think that pointing it out is cherry-picking.

“Please! You haven’t done any of the problems or actually studied the text. You’ve searched for the word ‘uncertainty’.”

No. All I did was look in the table of contents for the chapter on Linear Regression. Looked through that chapter to see if he described how to calculate the standard error. And then passed the information on to you, hoping that you wouldn’t reject it as being too mathematical. It was a bonus that he actually uses the word “uncertainty”.

And the “uncertainty” in y is the distance between y and the equivalent point on the trend line. It is the RESIDUAL.

That’s the uncertainty in y. What I’m talking about is the uncertainty in B.

You can tell this from Eq. (8.20)!
σ_y(equiv) = Bσ_x. where B is the slope of the line, i.e. dy/dx.

Yes, that’s how you calculate the uncertainty in y. Now get back to section 8.4.

You just keep on trying to rationalize to yourself that you can determine measurement uncertainty from only the stated values and the best-fit linear regression.

Take it up with Taylor.

“You can’t. Taylor doesn’t do it and neither can you.”

Apart from the bit I “cherry-picked” where he tells you how to calculate the uncertainty of A and B.

It is truly that simple.

It isn’t that simple. There are lots of complications in the real world, but the calculation of the trend and its uncertainty / confidence interval / standard error of the regression slope, or whatever you want to call it, is common knowledge and can be found in just about any book on statistics, including the ones you claim to have read and understood. It is that simple, and it’s a mystery, known only to your brain care specialist, why you refuse to see it. I’m sure there’s a proverb that explains it.

Measurement uncertainty means there is an interval within which *any* trend line is just as correct as any other trend line within that interval

Only if you assume all uncertainty intervals have a uniform distribution, and can conspire to line up in just the right way.

Taking just the stated values and ignoring the measurement uncertainty is just one more way of trying to justify ignoring the measurement uncertainty so you can ignore it.

Take it up with Taylor.

Reply to  Tim Gorman
July 25, 2022 9:18 am

For at least the fourth time, IT DOESN’T MATTER unless the prior trend is the same as the current trend.

I think at least one of us is suffering from memory loss, as I don’t remember you telling me that at all, despite repeated requests.

I can’t understand why you think it doesn’t matter which trend line you are using. One will show the residuals growing the other won’t.

No twisting and turning here, just an inability on your part to read a simple graph.

The graph has no relation to reality. I’ve given you the correct graph, and I’ve asked you to tell me how you detect the exact month the residuals start to grow. But all you do is present the same toy graph, and insist that it should be obvious where the deviation starts. Of course, it’s obvious in your graph, because you’ve shown an actual pause with no ambiguity.

Here’s my graph again. Just tell me how you calculate the exact point where residuals start to grow. Then explain why you think the growing residuals represent a slow down rather than a speeding up.

20220724wuwt2.png
Reply to  Bellman
July 25, 2022 11:55 am

“I think at least one of us is suffering from memory loss, as I don’t remember you telling me that at all, despite repeated requests.”

You don’t remember because you just dismiss anything you don’t agree with out of hand.

“I can’t understand why you think it doesn’t matter which trend line you are using. One will show the residuals growing the other won’t.”

Of course the trend line matters. The fact is that within the uncertainty boundaries you can’t be sure what the trend line is by just depending on the stated values. You *have* to consider the measurement uncertainties as well. I’ve attached a graph showing two trend lines, both within the uncertainty boundary, one with a positive slope and one with a negative slope. The residuals between them and the zero slope base, i.e. the “pause”, grow in both cases!

“Of course, it’s obvious in your graph, because you’ve shown an actual pause with no ambiguity.”

I’ve shown *exactly* what Monckton has shown, a pause trend line with no ambiguity. That’s why he starts with today’s date and goes backward to find the longest period of pause.

“Here’s my graph again. Just tell me how you calculate the exact point where residuals start to grow. Then explain why you think the growing residuals represent a slow down rather than a speeding up.”

I don’t know about your graph but I’ve attached the UAH one.

You can visually see a pause in warming from sometime in 1998 through sometime in 2014. You can see another one from 2016 through today. I’ve shown the growing residuals with wide dark lines on the graph. Monckton has already done the actual calculations, I see no need to reproduce them. They are visually obvious on the graph.

To everyone besides you I suspect.

two_trends.jpg
Reply to  Tim Gorman
July 25, 2022 1:58 pm

“You don’t remember because you just dismiss anything you don’t agree with out of hand.”

Fine. Provide the links to the other three times you told me it didn’t matter which trend line is used.

Of course the trend line matters.

Your exact words were “For at least the fourth time, IT DOESN’T MATTER unless the prior trend is the same as the current trend.”

Can you see why I might find your explanations confusing.

The residuals between them and the zero slope base, i.e. the “pause”, grow in both cases!

And another graph that has no relation to reality. If I use the trend up to the start of the pause, the residuals grow, as in they are bigger than the predicted trend. If I use the trend up to the current date, they don’t. And in neither case is there an obvious point where the deviations start.

“I’ve shown *exactly* what Monckton has shown, a pause trend line with no ambiguity.”

Yes. If you make up a pause with no ambiguity you get a pause with no ambiguity. What I’m trying to establish is how you handle the real data, which has variability, and a lot of ambiguity.

I don’t know about your graph but I’ve attached the UAH one.

My graph is the UAH data. Your attached graph is not.

“You can visually see a pause in warming from sometime in 1998 through sometime in 2014. You can see another one from 2016 through today.”

Then you are not seeing the same as Monckton as his first pause starts in 1997 and ends in 2015. But visually seeing something, is not a substitute for proper analysis. You can visually see lots of things, especially if you want to see them.

Monckton has already done the actual calculations, I see no need to reproduce them. They are visually obvious on the graph.

That sounds like an admission that all your claims of having a method to determine the start of the pause by looking for when the residuals start growing was just hot air. You are simply using Monckton’s cherry-pick and saying you can visually see it.

Reply to  Bellman
July 26, 2022 5:50 am

“Fine. Provide the links to the other three times you told me it didn’t matter which trend line is used.”

That’s not what I’ve been saying. I have said that it is impossible to know the actual trend line within the uncertainty interval. That’s what *uncertainty* means, the true value is unknown. It’s as true for trend lines within the uncertainty interval of the measured values as it is for each measured value alone.

“Your exact words were “For at least the fourth time, IT DOESN’T MATTER unless the prior trend is the same as the current trend.””

You simply cannot grasp even the simplest of concepts. As long as the two trend lines diverge the residuals will grow. It doesn’t matter what the trend lines are, if they diverge then they diverge and the residuals between the two will grow.

Why is this such a hard concept for you to understand?

“And another graph that has no relation to reality.”

Just because *YOU* can’t grasp the concept that does not mean it has no relationship to reality!

“If I use the trend up to the start of the pause, the residuals grow, as in they are bigger than the predicted trend.”

“If I use the trend up to the current date, they don’t.”

Cognitive dissonance at its finest!

“And in neither case is there an obvious point where the deviations start.”

Cognitive dissonance once again! You just said the residuals grow and now you are trying to say they don’t! ROFL!!

“My graph is the UAH data. Your attached graph is not.”

My graph is an attempt at teaching. Are you saying your trend line of the UAH data isn’t a positive slope linear trend line? That’s what I show in my graph. Are you saying that the most recent UAH data is not a horizontal line? That’s what my graph shows from using Monckton’s graph.

All you are doing is *still* trying to say that the change in slope of the recent data is meaningless. Nothing will shake your faith in the long term trend line. Not even the use of weighted values in doing forecasting will sway you from your religious belief.

“Then you are not seeing the same as Monckton as his first pause starts in 1997 and ends in 2015. But visually seeing something, is not a substitute for proper analysis. You can visually see lots of things, especially if you want to see them.”

OMG! The FIRST thing we were taught in probability and statistics was to graph the data and LOOK (i.e. visually) at it in order to assess the reasonableness of the data!

Proper analysis *must* include visual inspection. That is the quickest way to see if the data appears multi-nodal or skewed. Including the measurement uncertainty allows you to see if the data will allow identifying small differences. Just jumping into calculating an average and standard deviation, as you always do, is the primrose path to perdition.

“That sounds like an admission that all your claims of having a method to determine the start of the pause by looking for when the residuals start growing was just hot air.”

Oh, malarky! Your memory is just plain bad!

go here: https://wattsupwiththat.com/2022/05/14/un-the-world-is-going-to-end-kink-analysis-says-otherwise/

“As should be immediately apparent to any skilled data analyst, the least-squares linear regression presented does not represent the underlying data well. Specifically, the ‘error’ in the graph (the deviation of the estimated value from the actual values) increases dramatically after the late 1990s. This indicates that linearity of the data breaks down sometime around the late 1990s, so forecasts based largely on earlier data become invalid.”

“Select the candidate kink point that minimizes the total error. At this point, the amount of reduction in pooled estimation error (vis-à-vis a single linear regression) can be calculated easily, showing how the kinked-line model more closely matches the data set than a single linear regression. ”
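The quoted procedure can be sketched as a brute-force search over candidate kink points. This is an illustration of the idea only, with invented toy data, not the linked article’s actual code:

```python
def sse_of_fit(xs, ys):
    """Sum of squared residuals of the least-squares line through (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

def best_kink(xs, ys, min_seg=3):
    """Try every candidate kink point; return the index that minimizes the
    pooled error of two separate straight-line fits."""
    best = None
    for k in range(min_seg, len(xs) - min_seg):
        err = sse_of_fit(xs[:k], ys[:k]) + sse_of_fit(xs[k:], ys[k:])
        if best is None or err < best[1]:
            best = (k, err)
    return best

# toy series: rising for the first ten points, then flat
xs = list(range(20))
ys = [0.1 * x for x in range(10)] + [0.9] * 10
k, err = best_kink(xs, ys)
print(k, err)   # the kink is found at the point where the slope changes
```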

YOU EVEN DID THIS YOURSELF AND POSTED THE RESULT!

And now you are claiming that the process doesn’t work!

I’m done with you on this. I have a bunch of lawnmower work lined up I need to get to and I’ll be taking some vacation later this week.

You are a troll, nothing more. I tire of feeding you!

Reply to  Tim Gorman
July 26, 2022 2:10 pm

My graph is an attempt at teaching. Are you saying your trend line of the UAH data isn’t a positive slope linear trend line? That’s what I show in my graph. Are you saying that the most recent UAH data is not a horizontal line? That’s what my graph shows from using Monckton’s graph.

I think this section gets to the heart of the problem.

Tim has convinced himself that there must be more to Monckton’s pause than I say and reaches for an idea he read on a recent blog post about diverging residuals. But he doesn’t really understand or test it. He thinks it’s simple because in his mind it is simple. He sees it as being like the diagrams in school textbooks. He fondly imagines a set of points neatly forming a linear trend, then suddenly stopping and forming a neat horizontal line, with the point of departure from the trend line self-evident.

The problem is that the real world data is just not like that, for at least two reasons.

Temperature moves up and down all the time, constantly departing from the trend line only to return months or years later. It’s always possible that the variation is the start of a change of direction, but it probably isn’t, and it’s foolish to jump to conclusions. You need more evidence, a clear signal that there has been a change, not just wishful thinking.

The other problem is that Monckton’s pause, even if it’s a thing, is not what he implies. It isn’t temperatures moving up and then at some point stopping, as Tim’s cartoon graph suggests. It’s temperatures shooting up and then waiting for the trend line to catch up.

I’ll look into this in more detail later, now that Tim’s decided to stop pestering me. But the real issue that should be asked is why, if he is so sure of his facts, does he never try to test them in the real world, using the actual data? Is it the old adage of a beautiful theory being killed by an ugly little fact?

Reply to  Tim Gorman
July 26, 2022 2:27 pm

All you are doing is *still* trying to say that the change in slope of the recent data is meaningless. Nothing will shake your faith in the long term trend line. Not even the use of weighted values in doing forecasting will sway you from your religious belief.

Yes I am still saying it’s meaningless, because you’ve not been able to suggest otherwise.

But, then we get to the usual lies and strawmen. I do not have “faith” in the long term trend line. It might be correct, it might not be. All I ask is some meaningful evidence that there has been a change, or that a non linear trend is significantly better. My adage is that linear trends are rarely correct, but are usually the best first assumption.

Reply to  Tim Gorman
July 26, 2022 2:33 pm

YOU EVEN DID THIS YOURSELF AND POSTED THE RESULT!

Yes I did. But you want to ignore the conclusion. The best “kink” point is in 2012 and shows an upward trend.

20220517wuwt3.png
Reply to  Bellman
July 24, 2022 8:56 am

“When you add the two pauses together the trend is 0.12°C / decade, and just about statistically significant.”

See attached graph from Tom Abbott. As you can see the trend you are looking at reversed around 2012. The end of this graph is around where the 2016 El Nino started so you have a spike at the end but the cooling trend has continued since then.

That *is* statistically significant, especially when you see the trend line changed about 2000. I.e. the first big pause. If you can’t see that the residuals from the trend line to the data started to grow then you are being purposefully blind.

“It shows how arbitrary it is, and how it only works by cherry-picking the start date.”

And now we are back to your already debunked “cherry picking” claim.

Reply to  Tim Gorman
July 24, 2022 12:51 pm

Can’t see Abbott’s graph, and wouldn’t trust it if I could.

Here’s my own graph. Note, I made a mistake in where I thought the old pause started. According to Monckton it now started in January 1997, so the trend since then is only 0.11°C / decade. Still just about significant.

Reply to  Bellman
July 24, 2022 12:52 pm

See the attached graph here:

20220724wuwt1.png
Reply to  Bellman
July 24, 2022 3:55 pm

I know you wouldn’t. You can’t see anything that interferes with your delusions.

There are none so blind as those who will not see.

Reply to  Tim Gorman
July 24, 2022 4:28 pm

It’s always darkest before the dawn and a stitch in time saves nine.

Now what are you talking about? You claim to attach a graph and fail, then complain that I don’t trust the source. Maybe if you actually showed me the graph we could figure out what the rest of your ramblings were about.

Reply to  Tim Gorman
July 24, 2022 12:56 pm

That *is* statistically significant, especially when you see the trend line changed about 2000.

Please explain what you mean by “statistically significant”. The trend since 2000 is 0.15°C / decade.

If you can’t see that the residuals from the trend line to the data started to grow then you are being purposefully blind.

You keep throwing these hand-waving comments. Started to grow from what date, against what trend line?
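For what it’s worth, “statistically significant” for a trend usually means the fitted slope is more than about two standard errors from zero. A sketch with invented numbers (not the UAH series), assuming that reading of the term:

```python
from math import sqrt

def slope_and_stderr(ys):
    """OLS slope of ys against 0..n-1, plus its standard error
    computed from the residuals, as in standard statistics texts."""
    n = len(ys)
    xs = range(n)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    delta = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / delta
    a = (sy - b * sx) / n
    sse = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
    se_b = sqrt(sse / (n - 2)) * sqrt(n / delta)
    return b, se_b

# made-up monthly anomalies: a trend of 0.00125 per month (0.15 C/decade)
# plus an alternating +/-0.1 "noise" term
ys = [0.00125 * m + 0.1 * ((-1) ** m) for m in range(120)]
b, se = slope_and_stderr(ys)
# "significant" here means the ratio b/se exceeds roughly 2
print(b, se, b / se)
```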

Reply to  Tim Gorman
July 21, 2022 5:01 pm

You could pick a different eight year period and get a different trend. With data mining you could show an 8 year warming trend, an 8 year neutral flat trend, or an 8 year cooling trend.

Some people are too anxious to imply or declare global warming has ended. Based on history, when a global warming period ends, a global cooling period begins. Global warming is more pleasant than global cooling.

Reply to  Richard Greene
July 21, 2022 5:22 pm

Monckton starts in the present and works backwards. No cherry picking of start dates.

Monckton isn’t trying to say warming has ended. His graph just shows that CO2 can’t be the only thermostat.

T = f(CO2) + f(x) + f(y) + f(z) + …..

The primary function the models look at is f(CO2), i.e. T = f(CO2). The rest are either minor and are ignored or they are parameterized (i.e. they are guessed at, e.g. cloud impacts).

If T ≠ f(CO2) then it is incumbent on the climate alarmists to identify and quantify the rest, f(x), f(y), f(z), …… so that the pause can be explained, understood, and handled in future forecasts.

But they won’t. Basically because they can’t. They just have the climate models put out a y=mx+b linear trend line forever. The models can’t even follow the RCP scenarios properly. If CO2 is the thermostat and RCP2.6 shows CO2 growth tapering off then the temperature growth should taper off somewhere in the future. But it never does. No change ever in the slope of the trend line – just on and on and on into the future!

Reply to  Richard Greene
July 21, 2022 2:09 pm

“In 1975 NCAR reported significant global cooling from 1940 to 1975 that has since been “revised away” with no explanation.”
No-one has ever produced an actual NCAR report saying that. But what is clear is that in 1975, the only data that had been gathered was land data in the Northern Hemisphere, and only a few hundred stations at best. “Revised away” just means subsequently gathering adequate data.

Reply to  Nick Stokes
July 21, 2022 2:47 pm

You’re partially correct. It wasn’t an NCAR report. I think it was a 1975 National Academy of Sciences publication. I think the commonly referred to graph was an updated version of Budyko, 1969.

That said, the mid-20th Century cooling has not been revised away, at least not totally revised away.

https://www.woodfortrees.org/graph/hadcrut4gl/mean:12/from:1942/to:1978/plot/hadcrut4gl/mean:12/from:1942/to:1978/trend

The cooling was significant enough to temporarily halt the rise in atmospheric CO2.

The stabilization of atmospheric CO2 concentration during the 1940s and 1950s is a notable feature in the ice core record. The new high density measurements confirm this result and show that CO2 concentrations stabilized at 310–312 ppm from ~1940–1955. The CH4 and N2O growth rates also decreased during this period, although the N2O variation is comparable to the measurement uncertainty. Smoothing due to enclosure of air in the ice (about 10 years at DE08) removes high frequency variations from the record, so the true atmospheric variation may have been larger than represented in the ice core air record. Even a decrease in the atmospheric CO2 concentration during the mid-1940s is consistent with the Law Dome record and the air enclosure smoothing, suggesting a large additional sink of ~3.0 PgC yr-1 [Trudinger et al., 2002a]. The d13CO2 record during this time suggests that this additional sink was mostly oceanic and not caused by lower fossil emissions or the terrestrial biosphere [Etheridge et al., 1996; Trudinger et al., 2002a]. The processes that could cause this response are still unknown.

[11] The CO2 stabilization occurred during a shift from persistent El Niño to La Niña conditions [Allan and D’Arrigo, 1999]. This coincided with a warm-cool phase change of the Pacific Decadal Oscillation [Mantua et al., 1997], cooling temperatures [Moberg et al., 2005] and progressively weakening North Atlantic thermohaline circulation [Latif et al., 2004]. The combined effect of these factors on the trace gas budgets is not presently well understood. They may be significant for the atmospheric CO2 concentration if fluxes in areas of carbon uptake, such as the North Pacific Ocean, are enhanced, or if efflux from the tropics is suppressed.

MacFarling-Meure, C., D. Etheridge, C. Trudinger, P. Steele, R. Langenfelds, T. van Ommen, A. Smith, and J. Elkins (2006). “Law Dome CO2, CH4 and N2O ice core records extended to 2000 years BP“. Geophys. Res. Lett., 33, L14810, doi:10.1029/2006GL026152.

From about 1940 through 1955, approximately 24 billion tons of carbon went straight from the exhaust pipes into the oceans and/or biosphere.

[attached image]

Reply to  Nick Stokes
July 21, 2022 5:04 pm

“what is clear is that in 1975, the only data that had been gathered was land data in the Northern Hemisphere,”

Baloney

Reply to  Richard Greene
July 21, 2022 6:26 pm

Would you care to produce the supposed NCAR report?

If the claim is based on Budyko 1969, as David Middleton suggests, that was explicitly Northern Hemisphere. And there were no ocean datasets until mid 90’s.

Reply to  Nick Stokes
July 22, 2022 2:03 pm

We do now have SH reconstructions and the Mid-20th century cooling is still present. The leading hypothesis for the concurrent pause in CO2 rise is cooling of the southern oceans.

Clyde Spencer
Reply to  Richard Greene
July 21, 2022 5:52 pm

The pre-1979 surface data can’t be trusted.

If you are suggesting that NASA/NOAA and others are corrupting data, you may be correct.

However, if you are suggesting that the pre-1979 data are not fit for purpose, then I think you need to defend that. The earlier data may not have the same precision as today, and there may be some issues with sampling strategy to obtain a reliable global value. However, the utility of a single temperature for the globe is questionable anyway. The sampling protocol for older data is probably adequate for areas such as the Lower 48, and Western Europe. Which happens to be where a lot of people live. The solution is to state the temperature trends for just the well-sampled area of the globe, and not make a claim for the entire globe because sampling is still too sparse for some areas.

Tom Abbott
Reply to  Richard Greene
July 21, 2022 6:06 pm

“The pre-1979 surface data can’t be trusted.”

This can’t be trusted?

Hansen 1999:

[attached image]

This can’t be trusted?

Phil Jones says three time periods are equal in warming magnitude

[attached image]

Reply to  Richard Greene
July 22, 2022 1:17 am

“The rate of warming since 1975 suggests 100% natural causes are unlikely.”
Why?

Dave Fair
Reply to  Richard Greene
July 21, 2022 11:31 am

How about the 19-year flat trend before the latest Super El Niño? At the time the “experts” said it would take a 15 to 17-year flat trend to falsify the UN IPCC CliSciFi climate models.

Also, a 0.13 ℃/decade trend during the upswing portion of an up/down approximately 60 to 70-year cycle of temperatures doesn’t engender much fear.

Reply to  Dave Fair
July 21, 2022 11:58 am

The models are programmed to make scary predictions.
They have been wrong for 40+ years
Accurate predictions are obviously not a goal

Reply to  Richard Greene
July 21, 2022 5:09 pm

“That said, the mid-20th Century cooling has not been revised away, at least not totally revised away.”

The “revisions” were sufficient to make the cooling trend smaller than the margin of error claimed for the measurements. That’s close to revised away.

Ron Long
Reply to  Richard Greene
July 21, 2022 11:37 am

Calm down, please, Richard. Lord Monckton is not choosing starting points; he is running a linear regression, starting from the posting of new data each month, to calculate how far back in time the trend extends without an increase. Your ranting and raving is off the mark.
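[As I read the method described in this comment (my own reconstruction, not Monckton’s code), it amounts to: fix the endpoint at the latest month, then find the earliest start month for which the least-squares trend to the present is still zero or negative. A sketch:]

```python
import numpy as np

def pause_length(anoms):
    """Length in months of the longest period ending at the latest month
    whose OLS trend is <= 0; 0 if even the last two months trend upward.
    A reconstruction of the backwards-from-the-present method."""
    n = anoms.size
    t = np.arange(n)
    for start in range(n - 1):               # earliest candidate start first
        slope = np.polyfit(t[start:], anoms[start:], 1)[0]
        if slope <= 0:
            return n - start
    return 0

# Sanity checks on deterministic series:
print(pause_length(-np.arange(50.0)))        # strictly cooling -> 50
print(pause_length(np.arange(50.0)))         # strictly warming -> 0
```

[Whether the start month this search produces is “found” or “chosen” is, of course, exactly the argument running through this thread.]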

Derg
Reply to  Ron Long
July 21, 2022 12:01 pm

Exactly, meanwhile CO2 climbs

Reply to  Ron Long
July 21, 2022 12:01 pm

Monckton specializes in meaningless short term data mining that has no value in predicting the future climate.
Linear regression does not change that fact.

I calmly explained why Monckton is not helping climate realists with his data mining. “Ranting and raving” is how you describe anyone you disagree with.
A meaningless character attack.

Felix
Reply to  Richard Greene
July 21, 2022 12:25 pm

Personal attacks in violation of your own demand to stop personal attacks IS ranting.

Ron Long
Reply to  Richard Greene
July 21, 2022 12:39 pm

Richard, I think “ranting and raving” is appropriate based on the nature and style of your denigration of Lord Monckton, who appears to me to be a sincere and capable person.

Reply to  Ron Long
July 21, 2022 5:11 pm

I’m sure he is sincere.
But he is implying eight year trends are important when they are most likely random variations of a complex system.

Reply to  Richard Greene
July 21, 2022 1:15 pm

As I just posted, short term trends *can* become long term trends. That’s why in forecasting you *have* to give more weight to current data than past data.

Assume you are forecasting the capital investment in a telephone central office where the main independent variables are population growth and, secondarily, penetration. Do you continue to invest at the long term rate over the past 20 years even though population and penetration are both showing short term flattening?

Reply to  Tim Gorman
July 21, 2022 5:15 pm

How about this novel idea — stop making climate forecasts. They are consistently wrong and only serve to scare people. The past eight years might be the start of a new trend, or it might not.
So it is not very important.
Why not the past 20 years?
Or the past 40 years?

Reply to  Richard Greene
July 22, 2022 6:11 am

If the forecast includes appropriate uncertainty (not just best-fit metrics but true measurement uncertainty), then people can judge the credibility of the forecast for themselves.

Certainly, however, a forecast that is nothing but a y=mx+b based on the past 20 years or 40 years is not appropriate. Most people that are 50 yrs or older and live in fly-over country automatically understand that the climate is cyclic; they have *lived* through it.

We aren’t actually looking at just the past 8 yrs. The pause is actually over 20 yrs long, interrupted by an El Nino event. That *should* be long enough to cause even the climate alarmists to question how much impact CO2 actually has on temperature. It’s certain that CO2 in the atmosphere has increased substantially over the past 20 yrs yet we are not seeing a linear trend line of continuously increasing temperatures. That can’t just be dismissed by handwaving excuses like “the heat is hiding in the deep ocean” or “it’s just noise or natural variation”. Random noise or random natural variation would have heating and cooling interleaved over a 20 year period. If you look at Monckton’s data that is *exactly* what you see, interleaved higher and lower values whose average comes out to zero. See attached.

[attached image: image_2022-07-22_080904074.png]
Meab
Reply to  Richard Greene
July 21, 2022 1:38 pm

Monckton’s updates, in fact, ARE valuable. They show that climate factors other than CO2 that affect global temperature trends are at least equally important. We know what some of these factors are but our ability to predict them is essentially non-existent.

Since we cannot predict these factors, how can we possibly know how long a pause will be? Do we really know that the 42-year trend will continue? Hint; we don’t.

What we actually know is that the direct GH effect from CO2 is small, about 1 deg C for a doubling (we’re nowhere near a doubling). We’ve seen non-linear feedback theories that predict that CO2 increases will cause increases in water vapor (a stronger GH gas) that will lead to very large temperature increases but the large temperature increases have not happened – all but ruling out a large feedback effect.

Monckton is appropriately pointing out with actual data that we need to develop a better understanding of natural climate factors before we put too much faith in any climate prediction.

Reply to  Meab
July 21, 2022 2:50 pm

+100

Dave Fair
Reply to  Richard Greene
July 21, 2022 5:31 pm

Richard, please give us an example of data that has “… value in predicting the future climate.”

Reply to  Dave Fair
July 22, 2022 7:30 am

A coin
Flip a coin
More accurate than climate computer games.
Humans have not yet demonstrated any ability to predict the future climate.
SO WHY DO WE NEED ANY PREDICTIONS?

How about an honest analysis of the effects of global warming in the past 47 years? With almost eight billion first-hand witnesses of some or all of that global warming. How did the warming affect them, assuming they even noticed it?

If global warming continues, we should have an honest appraisal of how past global warming actually affected real people. Not some computer game prediction of much faster global warming in the future — a wrong prediction that began in the 1970s and has been wrong for about 50 years … so far.

Reply to  Richard Greene
July 22, 2022 3:58 pm

You are echoing what Freeman Dyson, the noted physicist, said a number of years ago. To have any legitimacy the climate models have to be holistic and look at the entire biosphere, not just temperature and CO2.

The climate scientists denigrated him over that and they are still doing so.

Dave Fair
Reply to  Richard Greene
July 22, 2022 10:06 pm

Yep.

Reply to  Richard Greene
July 24, 2022 1:51 pm

“I calmly explained why Monckton is not helping climate realists with his data mining.”

And you did so without understanding how these hiatuses can be used to disprove the models.

Reply to  TimTheToolMan
July 24, 2022 4:14 pm

Yep.

Reply to  Ron Long
July 21, 2022 4:52 pm

He looks back at every possible start month and chooses the one that gives him the longest possible flat trend.

I still find it incredible that people don’t think this amounts to choosing a starting point. How do you think his method would differ if he was choosing a start point?

Reply to  Bellman
July 21, 2022 5:09 pm

You find it incredible because you are jealous that he thought about doing it instead of you.

It is a valid analysis of the data and is meaningful.

He *chooses* the present date. He *finds* the earliest date.

Choosing and finding are *NOT* the same thing except in your fevered mind!

Reply to  Tim Gorman
July 21, 2022 6:21 pm

Thanks, I needed a good laugh after the last few days.

It is a valid analysis of the data and is meaningful.

Only to people who know nothing about statistics.

He *finds* the earliest date.

And by a staggering coincidence the date he “finds” is always the date he would have chosen, if he was trying to choose the date that gives him the longest possible zero trend.

Choosing and finding are *NOT* the same thing

In order to choose the best date you first have to find it. If I examine every cherry until I’ve found the biggest one, and then pick it, did I find the biggest cherry or did I choose to find the biggest cherry?

For some reason, some here think that calculating the best start date for your purpose is less of a problem than randomly choosing it.

Reply to  Bellman
July 22, 2022 2:14 am

At what point will you accept that temperatures are NOT increasing, contrary to predictions by models? How many years of little or no increase before you accept that the CAGW hypothesis is WRONG?

Reply to  Graemethecat
July 22, 2022 5:46 am

When someone can supply statistically significant evidence that it’s happening. Even before that, I might accept that there was a high probability that temperatures had stopped warming, if there were clear evidence for that, preferably with a clear description of what is being claimed, and if it could not be easily explained by ENSO conditions.

If you want to prove that increasing CO2 is not causing warming, then you will have to do better. It’s possible for warming to stop for a while despite increasing CO2; it just requires a stronger cooling effect.

At present, none of these cherry picked pauses are doing anything to suggest either warming has stopped or that there is no correlation between warming and CO2. On the contrary, the last 8 years have strengthened both the warming trend and the correlation with CO2.

Reply to  Bellman
July 22, 2022 3:39 pm

“It’s possible for warming to stop for a while despite increasing CO2, it just requires a stronger cooling effect.”

WHAT cooling effect? None of the models or the model designers know what the cooling effect is. So how can they judge what the warming factor is?

“At present, none of these cherry picked pauses are doing anything to suggest either warming has stopped or that there is no correlation between warming and CO2”

Of course they do! If T = f(CO2) and f(CO2) goes up but T doesn’t then there is *NO* correlation between the two.

If T = f(CO2) + f(x) + f(y) + ….. then WHAT IS x AND y? Do *you* know? If you don’t then how can you possibly judge the significance of the pause in temperature? We *know* that f(CO2) has gone up but temp hasn’t! You can claim that isn’t significant but you have never been able to actually show why!

“On the contrary, the last 8 years have strengthened both the warming trend and the correlation with CO2.”

According to the Met Office we are no warmer in 2021 than we were in 2016. What happened to the increasing warming trend?



[attached image: annual-global-temperature-forecast-graph-v6_met_ofc.png]
Reply to  Tim Gorman
July 22, 2022 4:16 pm

It was a hypothetical cooling effect, to explain that a significant pause or cooling trend does not necessarily prove that CO2 is having no effect.

You really need to try and read what I’m saying, rather than jumping to conclusions.

Of course they do! If T = f(CO2) and f(CO2) goes up but T doesn’t then there is *NO* correlation between the two.

You keep making these assertions, but never do any actual work to test your beliefs. What is the correlation between CO2 and temperatures using only data up to September 2014? What is the correlation when you include data up to present?

I can show you once again that the effect of the pause has been to strengthen the correlation, but you won’t believe me, and at the same time you won’t work it out for yourself. So you’ll continue to make these assumptions based on a cherry picked set of data.

According to the Met Office we are no warmer in 2021 than we were in 2016.

Try to keep up, we are only talking about UAH data here. But again you use a common sense fallacy. I say the effect of the last 8 years has been to increase the overall warming trend, and you dismiss this on the grounds that 2021 wasn’t as warm as 2016.

All you have to do is run your own linear regression over the data to confirm what I’m saying.

UAH:
Dec 1978 – Sep 2014, 0.11°C / decade
Dec 1978 – June 2022, 0.13°C / decade

HadCRUT4:
Jan 1975 – Sep 2014, 0.17°C / decade
Jan 1975 – Dec 2021, 0.18°C / decade

Reply to  Bellman
July 22, 2022 6:24 am

“Only to people who know nothing about statistics.”

No, to people that are even halfway familiar with forecasting. You are the type of person that has to be warned by the phrase “past performance is no guarantee of future returns” in financial ads.

“And by a staggering coincidence the date he “finds” is always the date he would have chosen, if he was trying to choose the date that gives him the longest possible zero trend.”

Your fevered mind is showing again. How could he choose what he doesn’t know? He *FINDS* what he doesn’t know!

“In order to choose the best date you fist have to find it.”

ROFL!! Finding comes first! Exactly opposite of what you have been claiming! You can’t cherry-pick what you don’t know!

“did I find the biggest cherry or did I choose to find the biggest cherry?”

Again, ROFL!!! Your claim is that he is cherry-picking the start date of the pause! And then you use an example of having to FIND the biggest cherry!

Finding is *NOT* cherry-picking!

Reply to  Tim Gorman
July 22, 2022 11:10 am

If you didn’t spend so much time rolling about on the floor, and actually tried to engage with what I’m saying, we might get somewhere.

No, to people that are even halfway familiar with forecasting.

Once again, nobody is forecasting anything at this point. We are talking about the past and present, not the future.

How could he choose what he doesn’t know? He *FINDS* what he doesn’t know!

He looks for what he wants to find, and when he finds it he chooses it as opposed to choosing another date.

We can keep playing these word games all day. It isn’t going to get us anywhere, unless you define your terms.

Finding comes first!

Which is the problem.

You can’t cherry-pick what you don’t know!

Which is why you need to find it first.

If you choose things at random without knowing what you will find, you are doing things correctly. Statistical inference is usually based on the assumption that the data is randomly chosen. Any attempt to find the data that will prove your point beforehand is cherry-picking.

Your claim is that he is cherry-picking the start date of the pause! And then you use an example of having to FIND the biggest cherry!

And again, I’ve no idea how you think you can choose the biggest cherry before finding it.

I would still like you to say what you think cherry-picking the start date would look like, and how it would differ from what Monckton does.

Reply to  Bellman
July 23, 2022 6:13 am

“Once again, nobody is forecasting anything at this point. We are talking about the past and present, not the future.”

If past isn’t future then what is the future?

“He looks for what he wants to find, and when he finds it he chooses it as opposed to choosing another date.”

So what? Cherry-picking requires you to *know* where to start, not where to end your “finding” process.

“If you chooses things at random without knowing what you will find, you are doing things correctly. Statistical inference is usually based on the assumption that the data is randomly chosen. Any attempt to find the data that will prove your point before hand is cherry-picking.”

You do *NOT* pick data points at random in a time sequence. You follow the time sequence.

You are back to throwing crap against the wall to see if something sticks. STOP IT!

“And again, I’ve no idea how you think you can choose the biggest cherry before finding it.”

Go look up the definition of “cherry picking”.

“I would still like you to say what you think cherry-picking the start date would look like, and how it would differ from what Monckton does.”

If you begin with an arbitrarily chosen starting data point and go forward then you have “cherry picked” the start date. If you start with the most recent data point and worked backwards you have *NOT* cherry-picked anything.

Get out of that box of delusions you live in!

Reply to  Tim Gorman
July 23, 2022 2:42 pm

If past isn’t future then what is the future?

Wow man. Too heavy for this time of night.

If you start with the most recent data point and worked backwards you have *NOT* cherry-picked anything.

Apart from the start date.

There are over 500 possible start dates, and you are finding the one that will maximize your claim, i.e. find the longest pause. Any statistical inference drawn from that choice of start dates has to take into account the large range of possible start dates you could have chosen but didn’t.
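[The point about the 500-odd possible start dates can be checked numerically: even a series that warms steadily underneath its noise will, if every possible start date is searched, usually yield a multi-year window ending at the present with a flat or negative trend. A rough Monte Carlo sketch; the trend and noise levels are made up, loosely in the range discussed in this thread.]

```python
import numpy as np

rng = np.random.default_rng(42)

def longest_flat_window(n=522, trend=0.013 / 12, sigma=0.12):
    """For one random trend-plus-noise realization, return the length in
    months of the longest window ending at the final month with OLS
    slope <= 0 -- i.e. the 'pause' a start-date search would report."""
    t = np.arange(n)
    y = trend * t + rng.normal(0, sigma, n)
    for start in range(n - 1):
        if np.polyfit(t[start:], y[start:], 1)[0] <= 0:
            return n - start
    return 0

pauses = [longest_flat_window() for _ in range(100)]
print("median 'pause' found by searching:", np.median(pauses) / 12, "years")
```

[Each simulated series genuinely warms at 0.13°C / decade throughout, yet the search almost always reports a multi-year “pause”, so any inference from the longest-found window has to account for the search itself.]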

Reply to  Bellman
July 22, 2022 7:33 am

An honest, less biased presentation:

Present all the UAH data since 1979, as context, and then add: By the way, there has been no warming since the warm 2015 / 2016 period affected by a large El Nino heat release. That flat short term trend could be the beginning of a new long term trend or just a random variation of a complex climate system.

Reply to  Richard Greene
July 22, 2022 9:15 am

I’ve given the context for UAH data many times.

Of course the short term trend could be the beginning of a new trend, but it could just as easily be the start of a trend warming at twice the previous rate. I prefer to be skeptical, and actually wait for evidence of a change.

[attached image: 20220707wuwt1.png]
Reply to  Bellman
July 23, 2022 4:23 am

“Of course the short term trend could be the beginning of a new trend”

So you are finally starting to come around. Soon you’ll be claiming it was *you* that first pointed this out. Unfreakingbelievable.

“I prefer to be skeptical, and actually wait for evidence of a change.”

The evidence of a change is two multi-year pauses separated by an impulse from the 2016 El Nino.

*Something* isn’t right with the models, they don’t show this.

Reply to  Tim Gorman
July 23, 2022 2:46 pm

So you are finally starting to come around. Soon you’ll be claiming it was *you* that first pointed this out. Unfreakingbelievable.

Flagrant quote mining. Read the rest of the paragraph you are quoting.

The evidence of a change is two multi-year pauses separated by an impulse from the 2016 El Nino.

And as I’ve said elsewhere, the trend over these two pauses is almost identical to the overall trend. As always, you want to claim that warming is caused by the El Niños, but ignore the fact that starting a trend just before the El Niño spikes is what causes the appearance of a pause.

Derg
Reply to  Richard Greene
July 21, 2022 12:00 pm

And yet CO2 climbs and climbs 😉

Dave Fair
Reply to  Derg
July 21, 2022 5:33 pm

And will continue to climb for the foreseeable future.

Felix
Reply to  Richard Greene
July 21, 2022 12:23 pm

Looks to me like he intentionally and explicitly shows the time back to the current temperature. What is deceitful about that? Were you expecting a completely flat temperature during the interval? Seems like your real complaint is about the word “pause”.

Clyde Spencer
Reply to  Richard Greene
July 21, 2022 5:42 pm

It appears to me that you don’t understand how he derives the zero trend line.

Reply to  Clyde Spencer
July 22, 2022 6:59 am

yep.

Reply to  Richard Greene
July 22, 2022 1:18 am

“42 years is climate.”
Says who?

Reply to  Mike
July 22, 2022 7:36 am

42 years is better than 8 years

paul courtney
Reply to  Richard Greene
July 22, 2022 9:54 am

Mr. Greene: Why do you say Monckton is making a prediction about future climate? Seems to me he is not doing that; instead he is debunking the prediction of CliSci that CO2 rise means temp must rise. Your 110% certainty is your worst enemy. As a fellow TCM watcher, I’m trying to be helpful. You can do better if you oppose the CliSci hoax. Consider that Monckton is using available data produced by others to show a brief anti-CliSci trend that debunks AGW. What are we fighting about?

Reply to  paul courtney
July 23, 2022 6:05 am

We’ve actually had two pauses, an 18 year one and an 8 year one, separated by the temperature impulse from the 2016 El Nino.

It’s been cooling since that El Nino while CO2 has continued to climb. Makes one wonder what it will take to cause the CAGW advocates to begin questioning the relationship between CO2 and temperature. Models are not data!

Reply to  Richard Greene
July 21, 2022 11:34 am

“… since the latest global warming trend began during the 1690s …”. Is that when Man started burning fossil fuels and increasing the atmospheric CO2 content? But we are told that it all started in 1850, even though the Keeling Curve doesn’t start to rise until the mid-1900s.

Reply to  Retired_Engineer_Jim
July 21, 2022 12:06 pm

There is no evidence man made CO2 had any measurable effect on the global average temperature before 1940, because man made CO2 emissions were small before 1940 (weak economies during the 1930s).

There has been warming and cooling by 100% natural causes for 4.5 billion years. The IPCC arbitrarily declared in 1995 that natural causes of climate change were just “noise”. That’s politics, not science.

twobob
Reply to  Richard Greene
July 21, 2022 3:41 pm

So the First World War and the build-up to it had no effect? Strange, that!

Reply to  twobob
July 21, 2022 5:19 pm

We did not have accurate global temperature data before 1920. Very poor coverage of the Southern Hemisphere and questionable ocean measurements with buckets and thermometers. And CO2 emissions growth was low.

Clyde Spencer
Reply to  Richard Greene
July 21, 2022 6:07 pm

We did not have accurate global temperature data before 1920.

You said 1979 earlier. Which is it?

Reply to  Clyde Spencer
July 22, 2022 7:42 am

I thought we had decent 1940 to 1979 measurements — not with the better surface coverage of satellites — but decent data.

But then the 1940 to 1975 global cooling was practically “revised away”. So I have no confidence in the “new” 1940 to 1975 numbers, because I have never read a good explanation of why the original 1940 to 1975 numbers were wrong.

Ronald Havelock
Reply to  Richard Greene
July 23, 2022 10:41 am

There is no evidence that man-made CO2 has any measurable effect on global temperature even now!

Ronald Havelock
Reply to  Retired_Engineer_Jim
July 23, 2022 10:39 am

The “Keeling curve” is what a lot of the global warming hysteria is all about because it goes up and up, ignoring the fact that it is a tiny component of the air we breathe. What is never explained is that the “curve” is about as linear and predictable as a “curve” can get. It is data taken at an unusual height at a very unusual spot, very close to an active volcano on an island in the middle of the largest ocean, where human activity in the form of population increase, cement production, increased vehicle and air traffic is ruled out as irrelevant, generated by father and son scientists, using only one method. This is “science”?

MarkW
Reply to  Richard Greene
July 21, 2022 11:44 am

And another one of those people who actually believe that starting today and counting backwards is “cherry picking”.
I guess when you can’t actually attack the methods, you have to grasp at something.

Reply to  MarkW
July 21, 2022 12:57 pm

Meaningless short term data mining with no predictive ability for the future climate. Monckton tries to convince people he’s on to something big. That’s his ego speaking.

Bob boder
Reply to  Richard Greene
July 21, 2022 1:16 pm

As opposed to meaningless long term models whose predictions have failed to the point of falsifying themselves? Lord M states over and over that his pause calculation is not intended to be predictive; it’s just a point of interest. His real work is his models that show the failings of the climate alarmist community and the IPCC.

Reply to  Bob boder
July 21, 2022 5:21 pm

8 years of actual data are better than always wrong long term climate computer games, but that’s not saying much

Matt G
Reply to  Richard Greene
July 21, 2022 11:56 am

“The (UAH) linear warming trend since January, 1979 still stands at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).”

The Arctic warmed around 1.8°C between 1975/76 and 2010.

Between the 1940’s and 1970’s there was about -0.12°C/decade using surface stations (this still stands despite being increasingly removed for the agenda) and the Arctic cooled around 1.8°C.

With the Arctic cooling and warming at the same rate it would be logical to assume the planet may have changed similarly.

Taking this into account, since the 1940’s it only stands at +0.005°C/decade, the recent warming period having been only slightly larger in change than the previous cooling period.

Reply to  Matt G
July 21, 2022 1:03 pm

In 1975 the change in average temperature between the coldest and warmest month in the 1940 to 1975 period was reported by NCAR at almost -0.5 degrees C. I know that is data mining, but it shows just how much cooling was “revised away”. Try to find 1940 to 1975 global cooling on current surface global average temperature compilations.

The warming since 1979 is faster than expected from 100% natural changes. +0.13 degrees C. per decade is significant if you trust UAH data. I do. The next 42 years could have a similar trend, or a completely different trend. Extrapolation forward of a prior 30 to 50 year temperature trend does not seem to predict well.

Reply to  Richard Greene
July 21, 2022 2:45 pm

Why do you trust UAH to identify such a small value? 0.013°C per year?

What is the measurement uncertainty of the radiation from the atmosphere? Since things like dust can affect that how do you quantify it over the entire globe?

What is the uncertainty of the measurement device in the satellite? Not the resolution capability of the sensor but the actual calibration uncertainty over time?

What is the uncertainty of the conversion algorithm used to convert the emission value to temperature value?

Trusting UAH would require that the total of all these uncertainties would have to be less than 0.013C or you couldn’t actually know if you’ve identified a difference or not.

I, myself, trust UAH more than the surface data but I still don’t “believe” UAH is accurate, not by any measure.

Dave Fair
Reply to  Tim Gorman
July 21, 2022 5:38 pm

As the gambler said when he was told the dice game was crooked: “But its the only game in town.” Come up with something better than UAH6 and people might listen to you.

Reply to  Dave Fair
July 22, 2022 6:53 am

This is the old False Dilemma argumentative fallacy. The gambler can start his own game. The gambler can move on to another town (i.e. come up with something better).

Would you use a measuring tape you *know* is inaccurate to build your next house even if it is the only measuring tape in town? You *could* of course do so. But would you trust the result enough to start ordering kitchen cabinets, furniture, and appliances? What if they don’t fit in the areas you have built for them?

Sweet Old Bob
Reply to  Richard Greene
July 21, 2022 3:30 pm

“The warming since 1979 is faster than expected from 100% natural changes. ”

Expected ??

By whom ?

Where is there any scientific logic in that statement ?

Reply to  Sweet Old Bob
July 21, 2022 5:31 pm

Expected is an opinion by scientists, not a proven fact. Increasing CO2 should cause some warming. There was warming from 1975 to 2020, with a strong positive correlation of CO2 and temperature. CO2 may have been one cause of the warming.

In recent centuries we did not have such a fast rate of warming, over a 45-year period, before there were large manmade CO2 emissions (all natural climate change).

DaveS
Reply to  Richard Greene
July 22, 2022 1:38 am

But you’ve said that we don’t have reliable data before 1979. Or before 1920 – you changed your mind between posts. If we don’t have reliable data for recent centuries, how do you know what the rates of warming were?

Reply to  DaveS
July 22, 2022 6:17 am

He doesn’t know. He seems to be a troll that argues black is white and white is black in order to increase the number of clicks and replies to his posts.

Reply to  DaveS
July 22, 2022 7:49 am

I said there are no global data before 1920. Too few Southern Hemisphere measurements, and poor N.H. coverage outside of the US, Europe, and Eastern China.

I said the 1940 to 1975 numbers were significantly revised to show less cooling, which is very suspicious.

I said UAH since 1979 is capable of decent measurements. It reflects a global warming trend that is slower than the surface measurements. Significant, but certainly harmless and not CAGW.

Tom Abbott
Reply to  Richard Greene
July 22, 2022 3:29 am

“In recent centuries we did not have such a fast rate of warming, over a 45-year period,”

Incorrect:

comment image

Reply to  Tom Abbott
July 22, 2022 7:55 am

Pre 1920 is not global data and Northern Hemisphere coverage is not very good until after WWII.

You are also comparing a 47 year period from 1975 to 2022 with shorter periods, in years where surface coverage was biased by too much infilling. You will not find another 47 year period with a warming rate similar to the 47 years from 1975 to 2022 in recent centuries.

paul courtney
Reply to  Richard Greene
July 22, 2022 9:59 am

Mr. Greene: You claim no reliable data, then tell us about “recent centuries”, about which you have no reliable data. See the problem?

Reply to  Richard Greene
July 22, 2022 8:03 pm

Richard, you need to click on the chart as it shows the data stopped at 2009.

Dr. Jones, a noted warmist scientist, made the point that all warming trends are similar.

Matt G
Reply to  Richard Greene
July 21, 2022 5:25 pm

The current adjusted surface global average temperatures have slowly decreased their cooling over the decades between the 1940’s and 1970’s. It is certainly wrong when the Arctic cooled similarly to what it has warmed recently.

I would support +0.13C per decade being quite normal, given global cloud albedo declining on average 1% per decade.

These two near 40 year periods are very similar in warming.

https://www.woodfortrees.org/plot/hadcrut4gl/from:1910/to:1950/plot/hadcrut4gl/from:1985

Matt G
Reply to  Matt G
July 21, 2022 5:33 pm

Correct the cooling to more like what it should be and the recent warming compared looks very modest.

https://www.woodfortrees.org/plot/hadcrut4gl/from:1910/to:1950/plot/hadcrut4gl/from:1985/offset:-0.4

Dave Fair
Reply to  Matt G
July 21, 2022 5:39 pm

+42X42^42

Tom Abbott
Reply to  Matt G
July 22, 2022 3:34 am

“The current adjusted surface global average temperatures have slowly decreased their cooling over the decades between the 1940’s and 1970’s. It is certainly wrong when the Arctic cooled similarly to what it has warmed recently.”

North America shows a similar cooling to the arctic cooling, where it cooled about 2.0C from the 1930’s to the 1970’s (about 1.5C of cooling from the 1940’s to the 1970’s:

comment image

Matt G
Reply to  Tom Abbott
July 22, 2022 11:08 am

Looks like about a 1C decrease using the 5-year mean.

Tom Abbott
Reply to  Richard Greene
July 21, 2022 6:37 pm

“The warming since 1979 is faster than expected from 100% natural changes.”

The warming was similar three different times in the past. The first two were caused by natural changes, but the last and current one was not? The last and current one looks strikingly similar to the previous two. You claim the current warming couldn’t be natural, without any evidence. The evidence actually points to today’s warming being natural since the Little Ice Age.

I do like this chart.

comment image

Tom Abbott
Reply to  Matt G
July 21, 2022 6:31 pm

“Between the 1940’s and 1970’s there was about -0.12C/decade using surface stations (this still stands despite being increasingly removed for the agenda) and the Arctic cooled around 1.8C.”

The U.S. cooled about 2.0C from the 1930’s to the 1970’s:

comment image

Crispin Pemberton-Pigott
Reply to  Richard Greene
July 21, 2022 12:09 pm

“Global warming continued even after the 1940 to 1975 cooling trend.”

No it didn’t. Get better facts.

Cooling was from 1940 to 1977, with sea ice increasing until 1979. Warming started again from 1978 to 1998 then stopped for a number of years. Fiddling with the data began in earnest during those latter years in an attempt to make it look as if CO2 concentration automatically translates into warming temperatures. From the evidence, it does not.

There was some warming in the 21st century when a super El Nino held fast, and no net warming since. Your claim above that we are seeing steps that are tending upward is true: with a warming rate of ~1.1 deg per century. There is nothing scary about 1.1 degrees of warming per century. The rate from 1920-1940 was far in excess of that and look – nothing happened of any consequence (except the global cooling that followed).

“Those trends were meaningless variations within a longer term warming trend.”

How true! Congratulations. I will not listen to people claiming that short term warming is indicative of any longer warming trend.

Century long warming trends are thankfully appreciated because when the world is warmer, everything gets better. The last thing we want, the worst of all possible futures, is one in which the world cools by one or two degrees globally as it appears to be doing at the moment. Quiet sun = dropping temperatures, shorter growing season, less food, more hungry people, plus we have an ideological war on “fossil” fuels, the stupidity of printing truck loads of fiat currency, rising global sovereign debt in the trillions, rising interest rates; all point to a cold, hungry, impoverished world in the immediate future. What else? Maybe read the book “Lucifer’s Hammer”.

I have ignored your back-biting about Lord Monckton. If you want to do something useful, refute his irreducibly simple climate forecasting formula and while you are at it, his paper on positive feedbacks and his claim that the IPCC warming feedback rate is impossible. You would gain considerable standing in the global warming community, were you to be successful. They have all failed at that task.

Reply to  Crispin Pemberton-Pigott
July 21, 2022 1:14 pm

Every global average temperature compilation shows significant global warming from 1975 to 2020. Do you have better data?
You don’t.

Monckton seems to want attention (for data mining) that he does not deserve.

I love global warming here in Michigan and favor a lot more CO2 in the troposphere to improve plant growth, if put there by burning fossil fuels with modern pollution controls.

I don’t like data mining.

There is no proof the IPCC warming positive feedback rate ever existed. It’s an unproven theory with no evidence, used to make scary climate predictions. How can you disprove a prediction without waiting for many decades? The feedback rate prediction is from the 1970s and has not been observed since then. It’s always “coming” in the future.

I have a theory that it is impossible to prove anything.
But I can’t prove it.

Robert Austin
Reply to  Richard Greene
July 21, 2022 2:44 pm

You appear to savour the term “data mining”. It’s fair game to critique Monckton’s work but using inappropriate terminology and using it in a vituperative manner lessens the value of your argument.

Reply to  Robert Austin
July 21, 2022 5:34 pm

When you have 42 years of UAH data, with two very large El Nino heat releases in 1998 and 2015/2016, and you choose to show only eight years of the 42, with 2015 and 2016 as your starting years, that’s biased data mining.

Robert Austin
Reply to  Richard Greene
July 21, 2022 6:40 pm

If your objective is to show how far back from the present a pause in warming extends, then you can only go back eight years with a flat trend. So in effect, the full extent of the data available is used, in that it shows at most an eight year flat trend extending back from the present. You may legitimately argue that the short term “pause” is a useless measure, but it is not deceptive, it is not cherry picking and it is certainly not a “data mining fraud”.
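The “how far back does the flat trend extend” calculation being debated here can be sketched in a few lines: try every candidate start point and keep the longest trailing window whose least-squares slope is zero or negative. This is a minimal illustration with synthetic data, not Monckton’s actual code; `pause_length` is a hypothetical helper name.

```python
import numpy as np

def pause_length(temps):
    """Longest span, counted back from the end of the series, over which
    the least-squares (OLS) trend is zero or negative."""
    longest = 0
    for start in range(len(temps) - 2):          # need at least 3 points for a fit
        y = temps[start:]
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        if slope <= 0:
            longest = max(longest, len(y))
    return longest

# Synthetic monthly anomalies: a slow rise followed by a gently
# declining final stretch of 96 "months" (8 years).
series = np.concatenate([
    0.001 * np.arange(400),          # warming segment
    0.4 - 1e-4 * np.arange(96),      # flat-to-cooling final segment
])
print(pause_length(series))
```

Because every possible start point is examined, the reported “pause” is the longest flat window the data permit, which is the crux of the cherry-picking dispute above.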

Reply to  Richard Greene
July 22, 2022 6:34 am

I’m sorry, you just aren’t getting it! This isn’t actually data mining at all!

If I size the next addition to my central office based on a 42 year long linear regression line instead of looking at the past five year data just how far off will my sizing of the addition be?

It’s the same with climate. Climate is a cyclical phenomenon, it is *not* a linear growth forever and ever. As a cyclical phenomenon there will be periods of temp growth and temp decline. You *have* to look at recent data in order to determine what is going on at any point in time.

Ask *any* reputable forecaster if it is valid to weight data from 40 years ago equally with data from eight years ago in order to determine the future. On ANYTHING, not just climate!

H*LL, just ask any farmer if they think they should make crop and planting decisions based on what happened 40 years ago instead of the past five years! They will laugh in your face! Even the old timers that dropped out of elementary school!

Tom Abbott
Reply to  Crispin Pemberton-Pigott
July 22, 2022 3:42 am

“Cooling was from 1940 to 1977, with sea ice increasing until 1979. Warming started again from 1978 to 1998 then stopped for a number of years. Fiddling with the data began in earnest during those latter years in an attempt to make it look as if CO2 concentration automatically translates into warming temperatures.”

I think that’s correct. You see, the alarmists were thinking that 1998 was going to be one more step to higher temperatures which would tend to verify their CO2 demonization delusions, but something happened after 1998: The temperatures started cooling and cooled by about 1.0C after 1998. This is when the serious manipulation of the temperature records began. This is when James Hansen said, “Well, maybe 1934 wasn’t that hot”.

comment image

These Climate Change Fraudsters ought to all go to jail for the harm they have caused, and are causing, our society.

Carlo, Monte
Reply to  Richard Greene
July 21, 2022 12:11 pm

This “cherry picking” by CMoB accusation is a dead horse, stop flogging it.

Tom Abbott
Reply to  Carlo, Monte
July 22, 2022 3:49 am

Really! It’s getting tiresome.

Felix
Reply to  Richard Greene
July 21, 2022 12:19 pm

I thought you objected to personal attacks.

Rick C
Reply to  Richard Greene
July 21, 2022 12:55 pm

I know a guy who keeps track of consecutive numbers that come up in Roulette. He studies these results and swears that from them it’s possible to discern trends in the results that can be used to improve winning odds in future spins. There are even videos online that purport to demonstrate how to use trends to win at Roulette. Casinos love these folks. The reality is that in highly variable data with an unknown uncertainty it is easy to fool yourself into thinking you see trends that are nothing but random variation in noise.

Still it is common and effective practice to use short term trends in process quality control to determine whether a process is in or out of control. But this requires:

  1. a process capable of being controlled.
  2. parameter measurements precise enough to accurately determine process results – e.g. measurement uncertainty < 25% of 1 standard deviation.
  3. Truly random sampling.
  4. Normally distributed data with a stable mean and S.D.

If these conditions are met then the Western Electric Rules

https://en.wikipedia.org/wiki/Western_Electric_rules

can be used to determine when the process might be out of control and require correction. I don’t think GAT meets these requirements.
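As an illustrative sketch of the simplest of the Western Electric rules (Rule 1: any single point more than three standard deviations from the centre line), assuming the centre line and sigma were estimated from a period when the process was known to be in control; the function name is hypothetical:

```python
def rule1_violations(values, center, sigma):
    """Western Electric Rule 1: flag the index of any point falling
    more than three standard deviations from the centre line."""
    return [i for i, v in enumerate(values)
            if abs(v - center) > 3 * sigma]

# Centre line and sigma assumed known from an in-control period.
data = [0.1, -0.2, 0.05, 3.6, -0.1, 0.3]
print(rule1_violations(data, center=0.0, sigma=1.0))  # → [3]
```

The remaining rules (runs of points beyond 2-sigma, 1-sigma, or on one side of the centre line) follow the same pattern, and all of them presuppose the four conditions listed above.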

Reply to  Richard Greene
July 21, 2022 1:01 pm

You are, in essence, using the argumentative fallacy of Appeal to Tradition. You are giving old data the same weight as new data – a forecasting no-no. New trends, especially over the short term, do sometimes become long term trends; that’s why you have to be able to identify the short term trends and watch them. You are like a horse with blinders: all you can see is part of the picture, till you get blindsided by something you couldn’t see coming.

The other issue is that the history of the planet and solar system is one of cycles. Linear trends based on time-dependent cycles are also fraught with danger, since you don’t know what you are actually basing the linear trend on. How do you trend a sine wave?
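The “how do you trend a sine wave?” question can be put in numbers: fit a straight line to the rising quarter of a pure sine cycle and you get a large, entirely spurious “trend”, while a fit over many full cycles collapses toward zero. Synthetic data, for illustration only:

```python
import numpy as np

# Ten full cycles of a pure sine wave: any fitted long-term trend is an artifact.
t_long = np.linspace(0, 20 * np.pi, 4000)
slope_long = np.polyfit(t_long, np.sin(t_long), 1)[0]

# The rising quarter of a single cycle looks like a strong "trend".
t_rise = np.linspace(0, np.pi / 2, 100)
slope_rise = np.polyfit(t_rise, np.sin(t_rise), 1)[0]

print(round(slope_long, 4), round(slope_rise, 3))
```

The short-window slope here is more than fifty times the many-cycle slope, even though the underlying process has no secular trend at all.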

Reply to  Tim Gorman
July 21, 2022 5:38 pm

An eight year flat trend has no use for climate forecasting.
Climate forecasting is notoriously inaccurate.
An eight year cooling trend while CO2 is rising could be important. But we still won’t know if the planet will be warmer or cooler in 100 years.

Reply to  Richard Greene
July 21, 2022 6:05 pm

You need to stop talking into the mirror, no one here is claiming what YOU suggest which is why you seem to be running in circles.

Reply to  Richard Greene
July 22, 2022 6:57 am

No one is suggesting using the eight year trend for forecasting. It is just an indicator that the past forecasts have a problem. It needs to be investigated to determine WHY the eight year pause is happening when the current climate models don’t show it *or* the recent 18 year pause. Something is wrong with the models – as you apparently agree with. You *do* need to consider present data with more weight than data from the past. Just ask any farmer.

July 21, 2022 10:46 am

global temperatures have since remained quite warm

the last eight years have been unusually warm

Where is this spot that has been unusually warm for eight years?
I would like to plan an annual holiday there to get away from a miserable “quite warm” Dublin.

AlanJ
Reply to  Michael in Dublin
July 21, 2022 1:15 pm

The past few years have been much warmer than the historical mean, despite the absence of a statistically significant warming trend:

comment image

Tom Abbott
Reply to  AlanJ
July 22, 2022 3:56 am

There’s a good book about Hockey Stick charts advertised on this webpage. You should read it.

AlanJ
Reply to  Tom Abbott
July 22, 2022 4:03 am

This isn’t a “hockey stick chart,” it is the instrumental surface temperature record.

ResourceGuy
July 21, 2022 10:46 am

In the previous pause it took time to build up the alarmist narrative and get the science community to respond and issue statements. In the current pause, the dismissive response is pre-packaged without the science community.

CD in Wisconsin
July 21, 2022 10:46 am

“……with global temperatures exceeding 1.2C above pre-industrial levels.”

***************

Ignoring the Little Ice Age seems to be an ongoing and favorite tactic among the CAGW alarmists. It is far too inconvenient to admit we’ve been coming out of one since the 19th Century. What is the Earth’s climate supposed to do while coming out of the LIA?

Thought control and propagandizing like this is absolutely vital to maintaining the CAGW narrative as a smokescreen for the power and money to drive political and eco-activist agendas in environmentalism and politics. The alarmists are going to have a real problem on their hands if and when the Earth’s climate goes into a serious cooling period in the future.

Reply to  CD in Wisconsin
July 21, 2022 11:35 am

But that will still be Climate Change (TM).

Derg
Reply to  Retired_Engineer_Jim
July 21, 2022 12:03 pm

Hahaha… global warming was terrible branding. It didn’t sound scary enough. I think Climate Extinction is Covid-type scary.

Reply to  CD in Wisconsin
July 21, 2022 11:37 am

There is no real time global average temperature to prove a global Little Ice Age. There were a few weather stations in Europe and the US and not much more.

Climate reconstructions are local and not accurate enough to determine whether a century averaged one degree cooler or warmer than the prior century.

We know climate always changes.

We don’t know if the Little Ice age was a global climate trend.

We know Central England is considerably warmer now than in the mid 1600s.

We don’t know how much the global average ocean temperatures changed since then.

The Holocene Climate Optimum from 5000 to 9000 years ago was probably global, but we can’t be sure. The local climate reconstructions show warmth that probably exceeds their margins of error for that period, but for the 5,000 years since then there’s no proof that the average temperature was ever warmer than it was in the past 10 years.

MarkW
Reply to  Richard Greene
July 21, 2022 11:49 am

Hundreds of studies from around the world indicate the existence of both the LIA and the MWP.
BTW, using your standard, nothing prior to the start of the satellite record in 1979 can be used for determining the temperature of the planet.

Derg
Reply to  MarkW
July 21, 2022 12:06 pm

Remember, to clowns like BigOil, the final nail, Griff, Simon… the MWP only happened in the town of Medieval… here in NY, they were freezing their @sses off 😉

Carlo, Monte
Reply to  Derg
July 21, 2022 12:40 pm

So now today we have “tipping points” left, right, and center.

Reply to  MarkW
July 21, 2022 1:22 pm

They are local climate reconstructions. Averaging them to create a “global average” tends to make variations quite small relative to the likely margin of error.

We know the planet is always warming or cooling but do not have accurate global average temperatures before the 20th century. So what?

All those claimed warming and cooling periods were caused by 100% natural causes of climate change. They do not answer the question about the effect of manmade CO2 emissions. They do show the IPCC should not dismiss all natural causes of climate change as “noise”.

Carlo, Monte
Reply to  Richard Greene
July 21, 2022 3:30 pm

Averaging them to create “global average” tends to make variations quite small relative to the likely margin of error.

Averaging does not reduce measurement uncertainty, and measurement uncertainty is not error.

Reply to  Carlo, Monte
July 21, 2022 5:48 pm

Averaging local reconstructions to create a “global” temperature has tended to reduce variations to about +/- 1 degree C. The locations selected for local climate reconstructions were not selected with the intention of having good global coverage.

AlanJ
Reply to  Carlo, Monte
July 22, 2022 5:47 am

If the mean is taken to represent a common tendency of the population, about which random errors exist in the individual members, then averaging will indeed reduce the uncertainty of an estimate of this common tendency. In fact the uncertainty in the mean is inversely proportional to the square root of the sample size.

Reply to  AlanJ
July 22, 2022 7:15 am

Alan,

Please! Not this statistical garbage again. The badly named “uncertainty of the mean” is actually not an uncertainty at all. It is merely the standard deviation of the sample means. It has nothing to do with accuracy and uncertainty. It is more a measure of how well the samples represent the entire population.

Doing this type of calculation also requires that the population be identically distributed around a true value, e.g. something like a Gaussian distribution or a uniform distribution.

Multiple temperature measurements of different things using different measurement devices are not guaranteed to be either a Gaussian distribution or a uniform distribution. As such, using standard deviation and average to describe the distribution is simply not correct. It should be described using the 5-number description – minimum, first quartile, median, third quartile, and maximum. Just like you would do with any skewed or multi-modal distribution.

The uncertainty of any “average” you calculate from such a distribution will inherit the uncertainty propagated from the individual elements in the distribution. Not the “average uncertainty” of the elements but the *total* uncertainty of the elements when added together.

Almost all climate scientists ignore the uncertainties of their measurements, whether out of ignorance or convenience is hard to determine. I can’t do that. It was trained into me early by my mechanic father, by subsequent trade work, and finally by my engineering experience.

AlanJ
Reply to  Tim Gorman
July 22, 2022 9:18 am

When you take the mean of a sample, you want to know how certain you can be that the calculated mean represents the true mean of the full population, which you cannot measure (in this case it would be every single point on the earth’s surface). That is the uncertainty in the mean. We are not supposing that the temperature measurements themselves form a Gaussian distribution but that the means of those measurements will form a Gaussian distribution, and that this distribution converges as the number of samples grows. The Central Limit Theorem states that the distribution of the sample means will be normal whether the underlying population is normally distributed or not.
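The claim being made here is easy to check with a quick simulation: draw samples from a decidedly non-Gaussian (skewed) population and watch the spread of the sample means shrink roughly as 1/√N. This is a sketch with synthetic data and an arbitrary seed, illustrating the statistical point only, not anything about actual temperature records:

```python
import numpy as np

rng = np.random.default_rng(42)
# A skewed, decidedly non-Gaussian population (exponential, scale 1):
population = rng.exponential(scale=1.0, size=100_000)

for n in (10, 100, 1000):
    # 2000 independent sample means, each from a random sample of size n
    means = rng.choice(population, size=(2000, n)).mean(axis=1)
    # Observed spread of the means vs. the theoretical sigma / sqrt(n)
    print(n, round(means.std(), 3), round(population.std() / np.sqrt(n), 3))
```

The two printed columns track each other closely at every sample size, which is the sense in which the “uncertainty of the mean” shrinks with N; whether that quantity answers the measurement-uncertainty question is exactly what the rest of this exchange disputes.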

Reply to  AlanJ
July 22, 2022 11:52 am

Your explanation does not go far enough. In order to achieve a Gaussian distribution of the sample means, one must first achieve a cohesive set of samples where the sample sizes are large enough, there are enough of them, and the samples themselves have the same distribution as the population. If these conditions are not properly met, the sample mean will have a large standard deviation. In other words, the sample mean will have a large interval surrounding it where the population mean could be.

Climate science wants to declare that the data points are samples. Then turn around and find the SEM, that is, the Standard Error of the sample Means, as if the samples are the entire population. The equation relating these is:

SEM = population SD / √N

Where the SEM is the standard deviation of the sample means. It is not the Standard Deviation of the population.

Yet they have declared them as samples, so they are actually dividing the SEM by √N, which gives them a worthless number.

This is really noticeable if you look at the samples. Winter and summer months have a different variance of temperature. Yet the samples are segregated by summer and winter, i.e., Northern Hemisphere and Southern Hemisphere. These samples can not have the same variance and standard deviation. Yet they are treated like they have the same distribution as the entire population.

Lastly, I would point out that none of this addresses the uncertainty of the measurements. For example, I have 10 samples where each is made up of measurements with an uncertainty. Then the mean of each of the sample will also include the measurement uncertainty. The measurement uncertainty must be factored into the uncertainty of the estimated population mean. The GUM refers to this as the “combined uncertainty”.
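The GUM-style “combined uncertainty” idea mentioned here can be sketched as follows, assuming independent measurements each carrying its own standard uncertainty: the individual uncertainties combine in quadrature and the sum is divided by N for the mean. This is a simplified sketch under an independence assumption, not the full GUM procedure (which also handles correlated inputs via covariance terms), and the function name is hypothetical:

```python
import math

def combined_uncertainty_of_mean(uncertainties):
    """Standard uncertainty of the mean of N independent measurements,
    each with its own standard uncertainty u_i:
        u_mean = sqrt(sum(u_i**2)) / N
    Independence is assumed; correlated inputs need covariance terms."""
    n = len(uncertainties)
    return math.sqrt(sum(u * u for u in uncertainties)) / n

# Ten readings, each with a standard uncertainty of 0.5 C:
print(combined_uncertainty_of_mean([0.5] * 10))  # sqrt(10 * 0.25) / 10 ≈ 0.158
```

For equal uncertainties this reduces to u/√N, so the disagreement in this thread is not about the quadrature arithmetic but about whether the independence and identical-distribution assumptions hold for real temperature measurements.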

AlanJ
Reply to  Jim Gorman
July 22, 2022 12:56 pm

When the population standard deviation is unknown then the standard error of the mean is estimated by using the sample standard deviation as a point estimate. It is in fact quite common not to know the standard deviation of the population, and to use the sample SD to estimate 95% confidence intervals.

When computing climatological means the anomaly is used rather than the absolute temperature, and station anomalies are gridded before being taken as a regional or global average, so seasonal variability across hemisphere is of little concern.

Reply to  AlanJ
July 23, 2022 6:36 am

“When the population standard deviation is unknown then the standard error of the mean is estimated by using the sample standard deviation as a point estimate.”

The sample standard deviation IS the Standard Error of the sample Mean! They are one and the same.

You can then estimate the population Standard Deviation by the formula I showed before.

Population SD = SEM * √N

You DO NOT calculate an SEM of the sample mean by dividing the SEM again by the √N. That is what climate science does. It is why they end up with a ridiculously small SEM.

AlanJ
Reply to  Jim Gorman
July 23, 2022 8:14 am

No one is doing that. If you want to know the estimated standard error of the sample mean without knowing the population standard deviation it is simply given as the sample standard deviation divided by the square root of the sample size.

Reply to  AlanJ
July 22, 2022 3:53 pm

“When you take the mean of a sample, you want to know how certain you can be that the calculated mean represents the true mean of the full population”

Again, if you do not have a Gaussian distribution (or at least an identically distributed population around a mean) in the population then a calculation of a mean is useless and meaningless.

Multiple measurements of different things simply will not provide an identically distributed population distribution. Thus standard deviation and mean are useless descriptors of the population.

“We are not supposing that the temperature measurements themselves form a Gaussian distribution but that the means of those measurements will form a Gaussian distribution, and that this distribution converges as the number of samples grows.”

Think about what you just said here. 1. the distribution may not be Gaussian. 2. the distribution converges to a mean as the number of samples grows.

Again, if you do not have a Gaussian distribution then convergence to a mean is useless. You *must* use the 5-number description of the distribution and that does not include a mean.

Suppose you take a set of temps from the SH and a set from the NH in the same month. For the SH you get (20F, 25F, and 30F). For the NH you get (70F, 75F, and 80F).

If you jam those together what do you get? You get a bi-modal distribution in which the average tells you nothing. It simply can’t give you an expectation for what the next temperature will be! That’s because the average temp simply doesn’t exist in reality!

You can take as many SH temps as you want and as many NH temps as you want, jam them together and you will *still* get an average value that doesn’t exist in reality or in the distribution.

“The Central Limit Theorem”

The central limit theory *ONLY* works with a Gaussian distribution. And even then it can only tell you how precisely you have calculated a mean from the sample means, it can’t tell you whether that mean is a true value or not. It can be as inaccurate as all git out! That’s because you leave out the uncertainty of the data elements when you do the sample. It’s the same thing Bellman does all the time. You can’t eliminate uncertainty by ignoring it.

Those temps above should be quoted as 20F +/- 1F, 25F +/- 1F, and 30F +/- 1F. The NH temps would be 70F +/- 1F, 75F +/- 1F, and 80F +/- 1F. Where in your samples and in the central limit theory are all those +/- 1F carried through? The CLT can’t eliminate them or make them smaller. Where do they appear?
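The two-hemisphere example above can be put in numbers (using the illustrative values from the comment): the mean of the combined set sits between the two clusters and matches no actual observation.

```python
sh = [20, 25, 30]   # southern-hemisphere temps (F) from the example
nh = [70, 75, 80]   # northern-hemisphere temps (F) from the example

combined = sh + nh
mean = sum(combined) / len(combined)
print(mean, mean in combined)  # 50.0, and 50 appears in neither cluster
```

Whether that makes the mean meaningless, or merely makes it a summary statistic that must be interpreted with care (as the anomaly-and-gridding reply argues), is the point in dispute.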

AlanJ
Reply to  Tim Gorman
July 23, 2022 6:43 am

“Think about what you just said here. 1. the distribution may not be Gaussian. 2. the distribution converges to a mean as the number of samples grows.”

No, I said the sample means will converge to a Gaussian distribution per the central limit theorem. This is how we can assess the uncertainty of the mean even for non-normally distributed populations. I’m concerned that you don’t understand what the central limit theorem is.

“Suppose you take a set of temps from the SH and a set from the NH in the same month. For the SH you get (20F, 25F, and 30F). For the NH you get (70F, 75F, and 80F).”

The anomaly is used to calculate mean temperatures and for global datasets the station anomalies are gridded before averaging. Your point is quite irrelevant in regards to global surface temperature records.

“The central limit theory *ONLY* works with a Gaussian distribution.”

It certainly does work for any population distribution. From Wikipedia:

“In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.”

A side note: are there two Gormans I am speaking to or is there a single Gorman who goes by two names?

Reply to  MarkW
July 21, 2022 5:43 pm

Claims that some centuries were 1 degree C colder, or 1 degree warmer, than other centuries are likely within the margin of error of the LOCAL climate reconstructions.

Tom Abbott
Reply to  MarkW
July 22, 2022 4:02 am

“using your standard, nothing prior to the start of the satellite record in 1979, can be used for determining the temperature of the planet.”

That’s what I was thinking, too. Rather limiting. Rather dismissive of all those people down through the years who regularly recorded the temperatures in their area, for posterity.

aussiecol
Reply to  Richard Greene
July 21, 2022 2:57 pm

Richard, the little ice age lasted for hundreds of years. It would be extremely unlikely to be a localised event for that period of time.

Reply to  aussiecol
July 21, 2022 5:50 pm

I agree, but the data do not conclusively prove that.
70% of the surface is oceans
Did they have a Little Ice Age too?
How about in the Southern Hemisphere?

July 21, 2022 10:53 am

Cripes, it’s difficult to keep up with what that ‘rebuttal’ is trying to say…
First we read this…

“the last eight years have been unusually warm – even warmer than expected”

Then we get this…

“There is no evidence that the past eight years were in any way unusual”

He doesn’t actually seem to rebut what Lord Monckton points out either. Monckton points out that the globe hasn’t heated over the last 7 years or so, and the ‘rebuttal’ simply says this isn’t unexpected, so he actually seems to agree.

Oh, Monckton doesn’t ‘claim’ there’s been no warming over the last 7 years or so, he’s simply making an observation.

The professor seems to be going out of his way to avoid using certain words.
E.g… “a few less hot years”
Why not just say ‘cooler’?

I wonder if alarmists like the professor get lessons on what language to use, what words to avoid, that sort of thing.

Crispin Pemberton-Pigott
Reply to  Chris Nisbet
July 21, 2022 12:11 pm

No need for lessons. They are smart people. They know full well they are lying by omission. Given the amount of funding involved, that is also something “not unexpected”.

Mr.
Reply to  Chris Nisbet
July 21, 2022 1:08 pm

Of course alarmists are well practised in emotive language usage.

I studied and wrote marketing & advertising copy for many years.

Persuasive words and phrases can make a significant difference in the response rates for a direct marketing campaign.

The discipline of direct marketing content development is write > test > tweak > retest > tweak > retest > again again again . . .

I’ve seen just 2 word changes in a 100-word ad lift response rates from 2.5% to 5.5%.

Words are very powerful emotion triggers.
Ask any snowflake.

ResourceGuy
July 21, 2022 10:55 am

The Climate Agenda Media Industrial Complex better package a lot more denial-of-thinking assaults because I don’t think a super El Nino is going to save them this time.

July 21, 2022 11:14 am

Eric, do you never get embarrassed publishing that GISP2 chart?

For well over a decade now it’s been pointed out, probably hundreds if not thousands of times, that the GISP2 data stop around 1855, not 2000, as claimed (again) in the chart title.

So it misses out on the entire modern period of warming.

To Eric and ‘skeptics’ in general: is this really the best you can come up with?

Dave Fair
Reply to  TheFinalNail
July 21, 2022 11:36 am

Its meant to show the large variations in temperatures over time during the Holocene that were not driven by CO2. Dispute that, if you will.

MarkW
Reply to  Dave Fair
July 21, 2022 11:56 am

Since he has to start his whine with an easily disproven lie regarding what the chart shows, there is no reason to pay any attention to the rest of his nonsense.

MarkW
Reply to  Dave Fair
July 21, 2022 12:01 pm

The chart also shows that for a solid majority of the last 10,000 years, temperatures have been warmer than today. Therefore the claims that a few tenths of a degree of further warming is both unprecedented and proof that bad things are going to happen cannot be supported using actual data.

Reply to  Dave Fair
July 21, 2022 12:08 pm

The chart shows a period with 100% natural climate changes.
It does not show any effect from manmade CO2 emissions after 1850.

Dave Fair
Reply to  Richard Greene
July 21, 2022 5:52 pm

So what?

Reply to  Dave Fair
July 22, 2022 8:03 am

The amount of manmade climate change after 1850 is an important question that needs an answer.

Dave Fair
Reply to  Richard Greene
July 22, 2022 10:05 pm

Who can determine the amount of manmade climate change after 1850? So far all I can see is rank speculation. Until natural processes are fully described it is all guesswork.

MarkW
Reply to  TheFinalNail
July 21, 2022 11:55 am

Do you ever get tired lying about the GISP2 chart. The last datapoint on the chart shows 95 years ago, and the chart itself was produced several years ago.

Anyone with even a 1st grade ability to read charts should be capable of recognizing that there is no claim that the data extends to 2000.

As regards the modern warming, if you add it to the end of the chart, it still barely gets temperatures back to the levels of the MWP.

If that’s the best you’ve got, you really should just hang your head in shame.

Reply to  MarkW
July 21, 2022 1:53 pm

“The last datapoint on the chart shows 95 years ago, and the chart itself was produced several years ago.”
It doesn’t. The last data point is 95 years Before Present. The chart is from Easterbrook, who doesn’t seem to know what that means:
“Because the “present” time changes, standard practice is to use 1 January 1950 as the commencement date (epoch) of the age scale.”

Even the WUWT reference page gets it right (after some argument):

[image]

The data ends, as Nail says, in 1855.
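The BP convention Stokes quotes makes the dating a one-line conversion. A minimal sketch, assuming only the 1 January 1950 epoch stated above (function name is illustrative):

```python
def bp_to_ce(years_bp: int, epoch: int = 1950) -> int:
    """Convert 'years Before Present' (BP) to a calendar year (CE).
    By the convention quoted above, the BP epoch is 1 January 1950,
    so negative BP values denote years after 1950."""
    return epoch - years_bp

print(bp_to_ce(95))   # last GISP2 temperature data point: 1855
print(bp_to_ce(-39))  # a negative BP value from the layer file: 1989
```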

MarkW
Reply to  Nick Stokes
July 21, 2022 4:55 pm

Poor, poor, Nick. He has gone to great lengths to refute a claim that I never made.
Not once did I claim that the data did not end in 1855.
What I disproved, and you provided support for that position, was the claim that the chart continued to the year 2000.

Reply to  MarkW
July 21, 2022 6:08 pm

Yeah, and he and others do it again and again because they like to eat a hearty meal of strawman soup.

Reply to  MarkW
July 21, 2022 6:20 pm

“Not once did I claim that the data did not end in 1855.”

Well, you said
“Do you ever get tired lying about the GISP2 chart. The last datapoint on the chart shows 95 years ago”

What exactly was TheNail’s lie supposed to be?

MarkW
Reply to  Nick Stokes
July 22, 2022 9:23 am

As I stated several times, it’s his claim that the chart continues to the year 2000.

Are you really this desperate to change the subject?

Derg
Reply to  TheFinalNail
July 21, 2022 12:07 pm

Hey, there you are buddy. Do you think Mann was a fraud?

Matt G
Reply to  TheFinalNail
July 21, 2022 1:14 pm

The GISP2 core was drilled in 1993, and the most recent data coverage from it was summer 1950 A.D. (ages are in yrs BP, where 0 represents summer 1950).

DATA:	

Depth (m)	Age (yrs BP 1950)

1.510	-39
2.040	-38
2.820	-37
3.320	-36
3.990	-35

https://www.ncei.noaa.gov/access/paleo-search/study/17835

Reply to  Matt G
July 21, 2022 2:56 pm

“the most recent data coverage from it, was summer 1950.”
Actually, it is 1989. But where is the temperature data? All that does is enumerate the layers of ice they drilled.

The temperature data is here. It starts 95 BP, ie 1855.

At least your file does verify that 0 BP is 1950 AD.

Matt G
Reply to  Matt G
July 21, 2022 4:08 pm

A further explanation is required: my link above indicates that the latest data from GISP2 is not 1950 but actually 1988/9.

DATA: 				

Year	Age	Depth	Annual Layer Thickness	Comments
(A.D.)	(yrs BP 1950)	(m)	(m)	
1988	-38	0.42	0.42	
1987	-37	0.95	0.54	
1986	-36	1.73	0.78	

GISP2 (also including GISP2 B above) has negative years that indicate periods after 1950. The data available has the most recent year at -39 cal yr BP, which represents 1989. The above shows -38 cal yr BP, representing 1988.

Resource Description (data set id):noaa-icecore-17835
Data Coverage:Latitude: 72.6
Longitude: -38.5
Minimum Elevation: 3200 m
Maximum Elevation: 3200 m
Earliest Year: 163164 cal yr BP (-161214 CE)
Most Recent Year: -39 cal yr BP (1989 CE)

“This archived Paleoclimatology Study is available from the NOAA National Centers for Environmental Information (NCEI), under the World Data Service (WDS) for Paleoclimatology. The associated NCEI study type is Ice Core. The data include parameters of ice cores with a geographic location of Greenland. The time period coverage is from 163164 to -39 in calendar years before present (BP). See metadata information for parameter and study location details. Please cite this study when using the data.”

Is this an updated data set to the illustrated graph?

Reply to  Matt G
July 21, 2022 4:31 pm

Again, your file shows the dating of the ice. But it does not show temperature data. That ends in 1855.

Matt G
Reply to  Nick Stokes
July 21, 2022 4:56 pm

Agreed, there is no temperature data that I can find from there. The data from R Alley according to him ends at 1855, but has he done it right? Where is the later data from the study? It seems strange that the time periods and length don’t match referenced work.

Reply to  Matt G
July 21, 2022 5:33 pm

“has he done it right?”
Alley is the authority people are quoting here. His file that I linked is the source for the top graph. If you don’t have that, you have nothing.

Matt G
Reply to  Nick Stokes
July 22, 2022 3:04 am

I know, but scientists can make mistakes and sometimes they need checking, especially when data doesn’t seem to match the timelines referenced in my link.

Tom Abbott
Reply to  TheFinalNail
July 22, 2022 4:12 am

Why are you complaining? The chart has the bastardized Hockey Stick chart tacked right on to the end of it. I would think you would like that. But I guess your complaint is really that the chart shows it was warmer in the past than it is today, even with the bogus Hockey Stick chart tacked onto the end.

jeff corbin
July 21, 2022 11:16 am

“Global warming” was the popular vernacular in the media prior to “The Pause”, after the deep minimum of the 23rd cycle which ended 2009-2010. Soon after, there was a terminology shift from “Global Warming” to “Climate Change” in the popular media. In the “global warming” days the risk was related to increasing temperatures. Since 2010, anthropogenic climate change has become the center of the hermeneutic for interpreting all extreme weather events and anything else wrong with the world. The movement departed its weak science and launched itself, fully leveraged by people who have no common cause with truth, into a full-fledged propaganda campaign of revolution. The movement is impervious to facts or logic, because they know what the problem is… it’s us! If we are the problem, then there is a reason to control us and disenfranchise us (the global masses), because the threat is ever imminent and pervasive. It was always a political movement with a bit of science; now it is a revolution.

Reply to  jeff corbin
July 21, 2022 11:41 am

It’s worse than you think. First it was global warming; then it was anthropogenic global warming; then it was catastrophic global warming; then it became Climate Change; then it became Catastrophic Climate Change. A few other catchy names were tried (Climate Weirding), but it settled back to Climate Change. It’s as easy as talking about carbon pollution (evil black stuff) rather than talking about carbon-dioxide.

July 21, 2022 11:21 am

Rough estimates (climate reconstructions) of pre-1850 local climates, in the era of nearly 100% natural causes of climate change, cannot be used to explain the long term effect of a manmade +50% CO2 increase since 1850.

Dave Fair
Reply to  Richard Greene
July 21, 2022 11:41 am

UN IPCC CliSciFi climate models cannot be used to explain the long term effect of a +50% CO2 increase since 1850. Hell, they can’t even get the past right.

Derg
Reply to  Richard Greene
July 21, 2022 12:08 pm

Long term effect of what?

Crispin Pemberton-Pigott
Reply to  Richard Greene
July 21, 2022 12:19 pm

How do you know all that increase in CO2 concentration is “manmade”? I am interested to see your sources. Usually scientific discretion allows no claims for atmospheric influence until after WWII, 100 years after you say humanity was influential. Even now, only half (at most) of human-sourced emissions accumulate in the atmosphere. From 170 to 70 years ago the total emitted was far below today’s CO2 disappearances. Worse for your argument, since 1950 the regrowth of the Eastern Forest of the USA has been absorbing 100% down to 80% of the CO2 emitted by the USA. Personally, I see the oceans as the primary source and sink for CO2. I repeat: “source”.

Reply to  Crispin Pemberton-Pigott
July 21, 2022 1:30 pm

Humans added far more CO2 than remained in the atmosphere — an estimated 50% increase of the CO2 level since 1850.
That means nature is absorbing some of the manmade CO2 emissions. Nature is a net CO2 absorber, not a net CO2 emitter. The CO2 increase is 100% manmade.

Proof of manmade CO2 in atmosphere:

“14C [is] an ideal tracer of carbon dioxide coming from the combustion of fossil fuels. Scientists can use 14C measurements to determine how much 14CO2 has been diluted with 14C-free CO2 in air samples, and from this can calculate what proportion of the carbon dioxide in the sample comes from fossil fuels.”

“Most fossil fuels, like oil and coal, which are ancient plant and animal material, have the same δ13C isotopic fingerprint as other plants. The annual trend–the overall decrease in atmospheric δ13C–is explained by the addition of carbon dioxide to the atmosphere that must come from the terrestrial biosphere and/or fossil fuels. In fact, we know from Δ14C measurements, inventories, and other sources, that this decrease is from fossil fuel emissions”

SOURCE OF QUOTE:
https://gml.noaa.gov/outreach/isotopes/c13tellsus.html
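The dilution logic in that NOAA quote can be sketched as a simple two-source mixing calculation. This is an illustrative simplification, not NOAA’s actual methodology (which accounts for radioactive decay, bomb-test 14C, and other reservoirs); the function name and the example ratio are hypothetical:

```python
def fossil_co2_fraction(sample_ratio: float, background_ratio: float = 1.0) -> float:
    """Toy two-source mixing: fossil-fuel CO2 contains no 14C, so a
    sample's 14C/12C ratio, expressed relative to the uncontaminated
    background, drops in proportion to the fossil-derived share."""
    return 1.0 - sample_ratio / background_ratio

# A sample whose 14C ratio is 97% of background implies ~3% fossil CO2.
print(round(fossil_co2_fraction(0.97), 2))
```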

Greenhouse effect:
Measurement of downwelling infrared radiation:

“Carbon dioxide is the second most important greenhouse gas (behind water vapor) so increasing its concentration gradually results in a stronger greenhouse effect. That means more downwelling infrared radiation (IR) being emitted toward Earth, … ”

… impeding Earth’s ability to cool itself (aka global warming).

Many measurement methodologies are discussed at the link below:

SOURCE OF QUOTE:
https://www.sciencedirect.com/science/article/pii/S2666017221000158

rbabcock
July 21, 2022 11:22 am

If Greenland is the bellwether, the trend is toward cold. We are at the bottom of the melt curve and from here on out, melting will slow and turn to accumulation around 4 to 5 weeks from now. If you look at where we stand compared to averages not a lot of melting has happened to date, and if you look at the snow forecast for the next week over the island, not a lot of melting will happen short term.

So at NASA’s estimate of 350GT = 1mm of sea level, it looks like Greenland all by itself will take out 1.25mm this season (Sept – Aug). Throw in that the Southern Hemisphere has been really cold this winter season, including high snowfall totals, and things are going to get dicey here.
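The conversion in that comment is simple arithmetic. A minimal sketch using the quoted 350 Gt = 1 mm equivalence (the sign convention and function name are my own; a 1.25 mm drop implies roughly 437 Gt of net mass gain):

```python
GT_PER_MM = 350.0  # NASA estimate quoted above: ~350 Gt of ice = 1 mm of sea level

def sea_level_change_mm(net_mass_gain_gt: float) -> float:
    """Convert a net Greenland ice-mass change (Gt, positive = accumulation)
    to the resulting change in global mean sea level (mm)."""
    return -net_mass_gain_gt / GT_PER_MM

# A 1.25 mm drop, as claimed above, would require ~437.5 Gt of net gain.
print(sea_level_change_mm(437.5))
```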

July 21, 2022 11:33 am

However, after a decade or so of slower-than-average warming, rapid temperature rise returned in 2015-16 and global temperatures have since remained quite warm. The last eight years are the warmest eight years since records began in the mid-1800s.

never forget Hansen predicted “business-as-usual” temps would rise by around 2 degrees by now, as opposed to the half-degree we actually got

July 21, 2022 11:38 am

Fact check the fact check of the fact check. That is the problem. No, the problem is that accepted “science” requires demonstrable proof, not model projections (from models known not to represent the empirical real data). Showing demonstrable cause and effect is hard, requiring a lot of data and a lot of effort. And unbiased researchers, not researchers on the payroll.

Duane
July 21, 2022 11:55 am

So all that carbon “goes into the oceans” – so whaddya spose happens to all that carbon in the oceans? Does it heat the oceans?

Hmm, what do all those trillions of tons of carbonate-based sea life do, both the living critters and their long dead ancestors? Oh, now I remember, they form coral reefs and coral-based “sands” that cover much of the sea bottom and islands in the tropical zones. And others get eaten by animals higher up the food chain .. while still gazillions of tons (about 5% of the total mass of the Earth’s crust) of that carbon get turned into limestone.

July 21, 2022 11:57 am

I’ve pointed this out before: even if the pauses/slowdowns are temporary, each one causes the secular trend to continue diverging from the models. Each 10 or 15 year pause just means that the temperature would have to surge impossibly fast just to catch up to the predictions. So we have a really slow rebound from the LIA that neither agriculture nor the biosphere will have any trouble with.

Carlo, Monte
July 21, 2022 12:16 pm

The trendologists won’t like this one, prepare for more whining.

Editor
July 21, 2022 12:29 pm

The Global Average Surface Temperature of Earth still has not risen to the expected Earth-like Temperature of 15°C. We are still at 14.9xxxx maybe.

AlanJ
July 21, 2022 12:52 pm

If one is careful enough in the selection of one’s time periods, one could say the entire observed modern warming period is nothing more than a series of pauses and coolings.

[image]

Quite remarkable. This present pause is surely going to be the warmest of all the pauses until the next one.

Bob boder
Reply to  AlanJ
July 21, 2022 1:25 pm

Take your chart back 6000 years and you will see this warming period is just one of many in a continuing downward trend, an identical pattern found in all interglacial periods during this ice age.

AlanJ
Reply to  Bob boder
July 21, 2022 1:34 pm

It doesn’t look like the modern warming follows any pattern seen during the Holocene:

[image]

Reply to  AlanJ
July 21, 2022 6:11 pm

No link for the chart. Is it a Marcott chart?

Your post is worthless without the background and the link.

AlanJ
Reply to  Sunsettommy
July 22, 2022 2:50 am

The reconstruction is from Kaufman, et al., 2020

Reply to  AlanJ
July 22, 2022 8:11 pm

What a crappy paper that was…..

It removes the LIA and the MWP completely.

AlanJ
Reply to  Sunsettommy
July 23, 2022 6:45 am

So it’s a crappy paper because it tells you things you don’t like hearing? Lol

Reply to  AlanJ
July 23, 2022 6:56 am

No because it eliminates the well documented MWP and LIA existence which you ignored.

AlanJ
Reply to  Sunsettommy
July 23, 2022 7:21 am

Rather it demonstrates, as if we needed more demonstration, that these were not globally synchronous events. The MWP did exist at different times for different parts of the world but it never had significant global expression. Such a phenomenon has never been well documented.

Reply to  AlanJ
July 23, 2022 9:46 am

No one claims it was synchronous and it was indeed all over the world within a 350-year period and you are LYING since it is well documented with hundreds of published papers.

Notice you are silent about the well-known LIA phase because you are quickly running out of steam in your lies about long-established climate phases.

That is why I don’t take papers that abolished well known climate phases of the MWP and LIA seriously as they are LYING.

AlanJ
Reply to  Sunsettommy
July 23, 2022 3:55 pm

If the MWP was not synchronous then the globe as a whole was never experiencing warmth comparable to today’s. Case closed.

The notion that the MWP was ubiquitously accepted by all scientists as having been a period of global warmth greater than today before multiproxy reconstructions were available is simply a lie. It was long debated exactly what the timing and extent of Medieval warmth was, and our global multiproxy reconstructions simply confirmed what many scientists had long suspected. There was no great “undoing” of a paradigm. The evidence we ever had before these reconstructions was sparse and uncertain.

Reply to  AlanJ
July 23, 2022 4:46 pm

I have yet to see you counter anything I posted, just more assertions while I have access to many published papers showing the existence of warming all over the world within the time frame MWP existed:

Medieval Warm Period Project

The dishonest application of the word “synchronous” to the MWP, while ignoring that today’s warming isn’t synchronous either.

Reply to  AlanJ
July 22, 2022 10:05 am

I like how the global temperature was magically flat for millennia. It’s magic, I tell you…

July 21, 2022 1:07 pm

Hausfather makes the claim for the graph of the Berkeley Earth temp data:
“this chart highlights a distinct acceleration in the rate of surface temperature change after the 1970s”
I checked some of the numbers on the chart and decided to have a bit of fun.
I took two 40 year periods:
1904 to 1944 (Before our CO2 emissions were supposed to be making much of a difference) and 1981-2021 (end of 70s to present day, when our emissions are supposed to be burning the earth to a crisp)

1904 to 1944
-0.20 deg – 0.51 deg
temp change = 0.71 deg

1981 to 2021
0.67 deg – 1.21 deg
temp change = 0.54 deg

If Zeke wants to pick the 1970s (handily low temps) as his starting point to prove we are accelerating towards thermageddon, I can pick the starting points of other periods to demonstrate the opposite. In other words, the whole eco-zealotry is just cherry picked rubbish.

BTW before you all pile on, don’t get too carried away with the method behind my calculations – it was just for a bit of a giggle.
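The comparison above can be reproduced directly from the endpoint anomalies the commenter lists (a sketch; the anomaly values are the ones read off the chart, not an independent dataset, and endpoint differencing is the commenter’s admittedly rough method, not a trend fit):

```python
# Endpoint temperature anomalies (deg C) as read off the Berkeley Earth chart.
periods = {
    "1904-1944": (-0.20, 0.51),
    "1981-2021": (0.67, 1.21),
}

for label, (start, end) in periods.items():
    print(f"{label}: {round(end - start, 2)} C")
```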

MGC
July 21, 2022 1:31 pm

Sorry to be so blunt, but what a shameful disgrace of an article.

The article opens with that tired old, long dismissed misrepresentation – the Easterbrook Greenland ice core graph – which is improperly labelled and disingenuously tries to make it look like it includes the contemporary instrumental record, when it really doesn’t.

Easterbrook’s wrong (again): http://hot-topic.co.nz/easterbrooks-wrong-again/

We then go on from there to be treated to a plethora of other typical “skeptical” echo chamber talking point misrepresentations and outright falsehoods. For example:

1- it [warming] disappeared into the ocean depths, allegedly.

There is nothing “allegedly” about ocean warming. Measurements from thousands of Argo ocean floats worldwide clearly confirm a continuing trend of significant heat accumulation in the oceans.

https://www.ncei.noaa.gov/access/global-ocean-heat-content/

It can only be a matter of time until the next major El Nino releases that heat to the atmosphere, at which time the latest surface warming “pause” will end, and the folly of the “skeptics” who want to try to pretend away the anthropogenic warming trend with yammering about “pauses” will (yet again) be exposed, just like last time.

2- “like the medieval warm period, Roman Warm Period and Minoan Warm Period, which look suspiciously like our current modern warm period”

Uh, no, those periods look nothing like our current warming. Research demonstrates that the current warming is far, far more global in scope than any of those periods:

Neukom, et al Nature 2019

“Here we use global palaeoclimate reconstructions for the past 2,000 years, and find no evidence for preindustrial globally coherent cold and warm epochs. In contrast, we find that the warmest period of the past two millennia occurred during the twentieth century for more than 98 per cent of the globe. This provides strong evidence that anthropogenic global warming is not only unparalleled in terms of absolute temperatures, but also unprecedented in spatial consistency within the context of the past 2,000 years.”

3- “I’m pretty sure we could figure out how to cope with the warming.”

Yes, we will ultimately figure out how to “cope”. The real question, though, is this: what will “coping” cost? Most economic analyses conclude that in the long run, taking little if any action to limit warming is likely the costliest and the riskiest alternative.

Reply to  MGC
July 21, 2022 2:27 pm

The Argo floats have a +/- 0.5C uncertainty. That’s wider than the difference trying to be identified. So how do we know the deep ocean is warming? In any case the Argo floats only go down (I think) to about 2000ft. The *deep* ocean is far deeper than that.

The paleoclimate reconstructions many times just totally ignore the social clues that are available. 2000 years ago the Romans were all over Europe. It wasn’t till later that the climate turned colder and they drew in. The Mongols didn’t expand throughout Asia during a *cold* period but during a warm period. Native Americans 2000 years ago were more populous than in later periods. Same with the Mayans. These peoples would not have developed and expanded during cold climates. I’m sure there are lots of other examples.

MGC
Reply to  Tim Gorman
July 21, 2022 5:01 pm

Here’s Gorman once again pretending that the Argo measurements are “too uncertain” when of course this is totally not the case. And the Argo floats go down to 2000 m, not 2000 ft.

re: “The paleoclimate reconstructions many times just totally ignore the social clues, blah blah blah … “

And how comically ironic is it to hear Gorman first whine about measurement uncertainty, but then mention “social clues” as if they could somehow provide “better” measurement uncertainty than actual temperature proxies. Such a notion is, of course, pure nonsense. Another laughably ridiculous Gormanian “skeptical” excuse.

Reply to  MGC
July 21, 2022 5:31 pm

“Here’s Gorman once again pretending that the Argo measurements are “too uncertain” when of course this is totally not the case.”

Sorry bud, it *IS* the case. The resolution of the sensor is something like +/- 0.001C.

But the sensor is not the float. The calibration of the sensor is dependent on the rate of water flow past the sensor, the pH of the water, and probably several other things I have forgotten. When the Argo floats were initially calibrated after being in the field, the FLOATS were found to have an uncertainty of +/- 0.5C. Of course that is within the tolerance the Federal Handbook of Meteorology No. 1 specifies for temperature measuring devices: +/- 0.6C.

Sensor resolution is *NOT* measurement device uncertainty. That is true for *any* measuring device, be it a digital voltmeter or a thermometer.

“then mention “social clues” as if they could somehow provide “better” measurement uncertainty”

And now you are doing what you usually wind up doing when you are shown to be wrong. You put words in people’s mouth to create a strawman and then argue against the strawman.

The use of social clues is only a true/false indicator concerning whether past history was warm or cold. You can use those to validate reconstructions. For example, just how warm did it need to be for the Vikings to grow crops and raise livestock on Greenland? If a reconstruction says Greenland has always been too cold for that then the reconstruction has a problem!

You are your own worst enemy on here. Are you surprised that no one believes anything you assert any more?

MGC
Reply to  Tim Gorman
July 22, 2022 5:21 am

Same tired old Gormanian nonsense, over and over and over again. The uncertainty of the mean value of 4000 measurements is far smaller than the uncertainty of any single measurement. Continuing to pretend otherwise remains nothing but the epitome of willful ignorance.
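The “uncertainty of the mean” claim being made here is the standard-error rule, sigma / sqrt(n). A minimal sketch of the arithmetic (note the caveat in the comments: the reduction only holds if the errors are independent and random, which is exactly what is disputed elsewhere in this thread; names are illustrative):

```python
import math

def standard_error(sigma: float, n: int) -> float:
    """Standard error of the mean of n measurements: sigma / sqrt(n).
    Valid only if the n errors are independent, random, and identically
    distributed; it does not reduce a shared systematic error."""
    return sigma / math.sqrt(n)

# +/- 0.5 C per float, 4000 floats: ~0.008 C, under those assumptions.
print(round(standard_error(0.5, 4000), 4))
```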

re: “The use of social clues is only a true/false indicator”

Therefore the temperature proxies themselves are a far, far, far, far, FAR better indicator, which was my point. But because of ideological bias, you blindly ignore what those proxies tell us and want to trust the say so of an indicator that gives some totally vague hot or cold true/false signal instead.

Unbelievably ridiculous. But what else is new. Unbelievably ridiculous is business-as-usual for a Gorman.

Reply to  MGC
July 22, 2022 8:15 am

The crux is his obsessive use of “uncertainty” as a magical, unknowable parameter. He describes its distribution when he amazingly claims that it is “off” by a certain amount, and that there’s no possibility of intermediate points where it could land. But he balks at using even that distribution quantification when pressed. Finally, both J and T retreat into misty tales about their “real world experiences” when pressed with the facts.

For me, he channels my retired union electrician BIL. He drank on the job for his 30+ years of union protected employment at our local utility. Now, after a few, his fav stories are about how he saved the asses of “Those college boys” at work. Fueled no doubt by the fact that his 3 boys wanted to be engineers but were, you know, “High Hopes, Low SAT’s”.

rah
Reply to  bigoilbob
July 22, 2022 11:10 am

LOL! So you’re saying that all the adjustments to weather station records by NOAA are bull shit. Thank you. But you’re really late to the party. We already knew that.

Reply to  rah
July 22, 2022 11:28 am

Wut? WTF did THAT come from?

Reply to  bigoilbob
July 22, 2022 4:20 pm

“The crux is his obsessive use of “uncertainty” as a magical, unknowable parameter.”

You just keep showing how little you understand of this. The uncertainty interval is not a magical, unknowable parameter. The TRUE VALUE is what is unknowable. It exists somewhere in the interval but it is UNKNOWN where! The whole purpose of the uncertainty interval is to show that you simply can’t ever have a perfect measurement!

“He describes it’s distribution when he amazingly claims that it is “off” by a certain amount, and that there’s no possibility of intermediate points where it could land.”

Again, more BS from an idiot. The true value has a probability of 1 of being the true value. All the other values in the interval have a 0 probability of being the true value. There can be only ONE true value. You just don’t know what it is!

“Finally, both J and T retreat into misty tales about their “real world experiences” when pressed with the facts.”

Not a single real-world example, based on experience, was ever refuted by you. And now you have just retreated to using the argumentative fallacy of Argument by Dismissal as a refutation.

And then you have to resort to ad hominems because you have nothing to actually offer in refutation of anything. Typical.

Carlo, Monte
Reply to  Tim Gorman
July 23, 2022 6:48 am

blob is another who is confuzzled about how uncertainty is not error.

MarkW
Reply to  bigoilbob
July 22, 2022 8:47 pm

Uncertainty is unknowable when you can’t define what all the possible errors are.
Your belief that you can decide what the answer should be, then manipulate the data until it gives you that answer, is unscientific, but par for the course in climate alarmism.

Carlo, Monte
Reply to  bigoilbob
July 23, 2022 6:42 am

The crux is his obsessive use of “uncertainty” as a magical, unknowable parameter.

blob weighs in and shows how clueless his heat-addled brain is.

Reply to  MGC
July 22, 2022 10:52 am

Tell me something. Have you ever held a job where the measurements you take and their uncertainty (think tolerances) made the difference in whether you kept your job or not? Think machinist or tool and die maker.

Have you ever had a job where your measurements had to meet legal requirements for accuracy, precision, and uncertainty? Have you performed engineering tasks that required a Professional Engineer to sign off on? Have you ever worked in a Certified Laboratory?

What was the job(s)?

If you have never had to meet forecasts, payroll, and other regulated requirements based upon your measurements then you simply have no room to criticize anyone. You won’t even use your name to identify yourself. That’s a good indication of just how much your criticism is worth.

Reply to  Jim Gorman
July 22, 2022 11:34 am

We’ve been thru this particular “real world experience” BS whine over and over, ad nauseam. There’s NOTHING about NCM operation that disagrees with centuries old statistical laws.

BTW, I AM a professional engineer. Precision, accuracy, uncertainty are our stock in trade. And NONE of these parameters is inconsonant with the Engineering Statistics 101 rules of the road that you seem to be hysterically blind to.

Reply to  bigoilbob
July 22, 2022 4:38 pm

*MY* engineering training, BSEE, power and nuclear, emphasized the need to consider uncertainty in *everything*!

If you were truly a professional engineer then you would be far more experienced with uncertainty. When building infrastructure you *have* to consider uncertainties, be they in ordering fish plates to join struts, ordering conduit between service panels, or even in designing rise/tread in stairwells.

My guess is that you really don’t even know what molding in a house is for!

Reply to  bigoilbob
July 23, 2022 5:46 am

If you are a professional engineer, what is your degree and from what university?

What courses have you had in metrology that dealt with measurement uncertainty? What textbook did you use?

Here is a photo of my degree. Let’s see yours!

Please note, I do not hide behind an anonymous nomenclature. I am proud of my accomplishments and have no problem with folks knowing about me.

Reply to  Jim Gorman
July 23, 2022 7:30 am

“If you are a professional engineer, what is your degree and from what university?”

BS, Petroleum Engineering, Missouri School of Mines, 1981. (Admittedly, only slightly higher rated than yours.)
MS, Petroleum Engineering, University of Southern California (Drilling Emphasis), 1995.

But a degree doesn’t make you a “Professional Engineer.” Search for Oklahoma Professional Engineer #14428. 1985, by both references and examination (unlike Texas). But this surrenders my full name, so please honor my nom de WUWT here. BTW, where’s yours?

Reply to  bigoilbob
July 24, 2022 5:43 am

No, but being a Professional Engineer means you are responsible for the final approval of items that DO REQUIRE proper uncertainty calculations for life and safety purposes.

You should have intimate knowledge of measurement uncertainty, yet you have indicated little ability to understand how it applies to measurements of different things with different devices. I can only assume you have had little to no training in metrology at all.

I was lucky to have worked in the old Bell Telephone system and received much training developed by Deming and Shewhart at Bell Telephone Laboratories in Statistical Process Control (SPC). Part of this training was learning about uncertainty in measurement and how that can affect the end products meeting nominal specifications. They would be appalled at the lack of disciplined statistical analysis in the treatment of measurement data in climate science.
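For readers unfamiliar with SPC, the Shewhart control-chart idea referred to here can be sketched in a few lines. This is a minimal illustration with made-up numbers, not anything from the Bell training material:

```python
from statistics import mean

def xbar_limits(subgroup_means, sigma_within, n):
    """Shewhart x-bar chart: center line +/- 3*sigma/sqrt(n), where
    sigma_within is the within-subgroup standard deviation and n is
    the subgroup size. Points outside the limits signal a process
    that is out of statistical control."""
    center = mean(subgroup_means)
    margin = 3 * sigma_within / n ** 0.5
    return center - margin, center + margin

# Hypothetical subgroup means from a stable process:
lcl, ucl = xbar_limits([10.0, 10.2, 9.8, 10.0], sigma_within=0.3, n=9)
```

A subgroup mean falling outside (lcl, ucl) would be flagged; in SPC practice, measurement uncertainty is one of the contributors to sigma_within rather than something assumed away.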

MGC
Reply to  Jim Gorman
July 23, 2022 7:46 am

Well, that explains a lot. The University of Kansas is not even in the top 100 engineering schools in the country, LOL.

The academic training of all those scientists and engineers that Gorman ridiculously pretends are “wrong” is orders of magnitude better.

Reply to  MGC
July 24, 2022 5:58 am

Just FYI, we had professors who wrote textbooks used in a number of different universities and they consulted in industry. Two of my professors worked at Bell Labs in the development of tunnel diodes. How about you? Have your professors had this kind of experience?

Funny how you can’t or won’t show your education since it seems so important to you. BTW, KU had one of the few nuclear reactors run by a university when I went to school. Don’t denigrate what you know nothing about!

And, somehow, the mathematics required haven’t changed a whole lot over the years. Maxwell’s EM equations, Planck’s heat radiation, and thermodynamics all still rely on the same tried and true EXPERIMENTALLY derived mathematics. Can you say the same about General Circulation Models?

Reply to  MGC
July 24, 2022 9:10 am

“Well, that explains a lot. The University of Kansas is not even in the top 100 engineering schools in the country, LOL.”

This is the argumentative fallacy known as Poisoning the Well. Why am I not surprised to see you using it?

KU was a leading research location for Satellite Remote Sensing in the 70’s and 80’s. They even had their own nuclear research building. Many engineering schools were not this advanced.

Carlo, Monte
Reply to  bigoilbob
July 23, 2022 6:49 am

Bluff and bluster keep blob inflated.

Reply to  MGC
July 22, 2022 4:08 pm

“Same tired old Gormanian nonsense, over and over and over again. The uncertainty of the mean value of 4000 measurements is far smaller than the uncertainty of any single measurement. Continuing to pretend otherwise remains nothing but the epitome of willful ignorance.”

Nope. The uncertainty of that mean is *NOT* an uncertainty. It is actually the standard deviation of the sample means. If those sample means are inaccurate then so is the mean calculated from them. The standard deviation of the sample means is meaningless when it comes to accuracy.

Uncertainty does *NOT* cancel unless you have a Gaussian distribution, and multiple measurements of different things using different devices simply can’t be assumed to form one.

If you have a distribution of temperatures like (20F +/- 1F, 21F +/- .5F, 19F +/- 2F, 18F +/- 1F, 70F +/- .6F, 71F +/- .8F, 80F +/- 1.1F, 77F +/- 1.2F, 16F +/- .9F) and you pull samples from that population what happens to the uncertainty of the mean you calculate from those stated values?

You can assert that all those uncertainties of the individual temps will cancel but they won’t. The mean you calculate won’t even exist in reality. It will be a meaningless number.

Like all statisticians you have been trained to only look at stated values of a distribution which implies that all the stated values are 100% certain. In other words no uncertainty at all in the stated values.

It just doesn’t work that way in the real world!
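To make the example above concrete, here is the arithmetic under the standard propagation rule for the mean of independent readings, u(q̄) = sqrt(Σu_i²)/n. This is a sketch only; whether that rule’s independence assumption holds for field temperature data is precisely what is being disputed in this thread:

```python
# The nine readings from the comment: (stated value in F, uncertainty in F)
temps = [(20, 1.0), (21, 0.5), (19, 2.0), (18, 1.0), (70, 0.6),
         (71, 0.8), (80, 1.1), (77, 1.2), (16, 0.9)]

n = len(temps)
mean_stated = sum(t for t, _ in temps) / n         # mean of stated values

# Propagated uncertainty of the mean, assuming independent errors:
u_mean = sum(u ** 2 for _, u in temps) ** 0.5 / n
```

The mean, about 43.6 F, sits in the gap between the cold and warm clusters and matches none of the readings, which is the comment’s point about a bimodal mix: a small propagated uncertainty (about 0.36 F here) says nothing about whether 43.6 F describes any real temperature.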

Carlo, Monte
Reply to  Tim Gorman
July 23, 2022 6:51 am

It is really unfortunate someone in the misty past came up with the term “standard error” and it has stuck.

Reply to  Carlo, Monte
July 23, 2022 2:03 pm

Would you prefer “probable error”?

Reply to  Carlo, Monte
July 24, 2022 8:34 am

I’m sure you just hate those inconvenient, well defined, standard statistical terms.

https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp

Reply to  bigoilbob
July 24, 2022 10:33 am

“I’m sure you just hate those inconvenient, well defined, standard statistical terms.”

From your link:

“Standard error of the mean (SEM) measures how far the sample mean (average) of the data is likely to be from the true population mean.”

If your data elements have random, independent uncertainty intervals then how do you know the “true population mean”? The uncertainties of the individual data elements propagate onto the mean – so your mean should be quoted as “stated value +/- total uncertainty”.

SEM is only accurate if there is no uncertainty in the population mean. And this is what statisticians like you and the rest of the climate “clique” do – IGNORE THE UNCERTAINTY OF THE POPULATION MEAN. Just erase it. Pretend like there is no uncertainty in the measurements!
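The distinction being argued can be shown side by side: the SEM is computed from the spread of the stated values alone, while the propagated measurement uncertainty comes from the per-reading ± intervals. They are different quantities and can differ greatly (a sketch with illustrative numbers, assuming independent errors):

```python
from statistics import stdev

def sem(values):
    """Standard error of the mean: spread of the stated values only."""
    return stdev(values) / len(values) ** 0.5

def propagated_u(uncertainties):
    """Propagated uncertainty of the mean for independent errors:
    sqrt(sum(u_i^2)) / n."""
    n = len(uncertainties)
    return sum(u ** 2 for u in uncertainties) ** 0.5 / n

# Four tightly clustered readings, each carrying a +/- 0.5 uncertainty:
readings = [10.1, 9.9, 10.0, 10.2]
```

Here sem(readings) is about 0.065 while propagated_u([0.5]*4) is 0.25: a small SEM reflects only how repeatable the stated values are, not how accurate they are.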

Reply to  Tim Gorman
July 24, 2022 10:53 am

“If your data elements have random, independent uncertainty intervals then how do you know the “true population mean”?”

OMFG. What a load of self serving, undocumented BS. To say that a data set, even with both distributed x and y data, has no “true population mean”, or no true trend for that matter, is total RME material. Yes, they will have increased, easily calculated, standard deviations/standard errors, but their expected values are indeed “true”.

“SEM is only accurate if there is no uncertainty in the population mean. And this is what statisticians like you and the rest of the climate “clique” do – IGNORE THE UNCERTAINTY OF THE POPULATION MEAN.”

Er, no. Might want to look back at yesterday’s comment by me to Mr. Carlo. I did just that with available data that I linked to. As climate scientists do every day…..

Reply to  bigoilbob
July 24, 2022 3:34 pm

“OMFG. What a load of self serving, undocumented BS. To say that a data set, even with both distributed x and y data, has no “true population mean”, or no true trend for that matter, is total RME material. Yes, they will have increased, easily calculated, standard deviations/standard errors, but their expected values are indeed “true”.”

In other words you don’t understand anything about non-Gaussian distributions at all, let alone distributions whose data elements have uncertainty.

If you do not have at least an identically distributed distribution, then the mean and standard deviation are *NOT* the proper statistical description of the distribution.

Why is this so hard to understand? In this case you *have* to use the 5-number description: minimum, first quartile, median, third quartile, and maximum. You might even have to use a different method of statistically describing the distribution.

Suppose you have a distribution of (0,0,1,2,63,61,27,13).

Is this a Gaussian distribution? You would say that the mean is 21, and the standard deviation is 25. Exactly what do you think that tells you about the distribution? Does the mean even exist? Can you use it for anything?

Now, give each an uncertainty: 0 +/- 0.5, 0 +/- 0.5, 1 +/- 0.25, 2 +/- 0.75, 13 +/- 0.5, 27 +/- 1, 61 +/- 0.25, and 63 +/- 0.25.

What is the uncertainty of the mean value = 21? Is it zero? No uncertainty at all? Can you just ignore the uncertainties like you always want to do?

“Er, no. Might want to look back at yesterday’s comment by me to Mr. Carlo. I did just that with available data that I linked to. As climate scientists do every day…..”

All you are doing is demonstrating you have a Statistics 101 understanding of metrology, nothing more.
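For reference, the round figures quoted in the example above do check out numerically (using Python’s population statistics):

```python
from statistics import mean, pstdev

values = [0, 0, 1, 2, 13, 27, 61, 63]   # the example distribution
m = mean(values)     # 20.875, quoted as 21
s = pstdev(values)   # about 25.27, quoted as 25
```

Both figures describe only the spread of the stated values; as the comment stresses, neither says anything about the per-value ± uncertainties, which have to be handled separately.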

Reply to  Tim Gorman
July 24, 2022 7:02 pm

“You would say that the mean is 21, and the standard deviation is 25. Exactly what do you think that tells you about the distribution?”

It tells me that the parameters are distributed. For example, it could be the initial production rates of a step out drilling program. And since future drilling might be expected to have the same distribution of outcomes, the IP rates would indeed tend towards Gaussian. Hint: non-Gaussian data is still valuable.

“Does the mean even exist?”

Yes.

“Can you use it for anything?”

If you have reason to believe that the data is representative, or can be projected in any way, you can use it for future decision making.

“Now, give each an uncertainty: 0 +/- 0.5, 0 +/- 0.5, 1 +/- 0.25, 2 +/- 0.75, 13 +/- 0.5, 27 +/- 1, 61 +/- 0.25, and 63 +/- 0.25.
What is the uncertainty of the mean value = 21?”

To 4 significant figures I am getting an increase from the “certain” data that was 25.27 to 25.28. I.e., none.

“Is it zero?”

No.

“No uncertainty at all?”

See above.

“Can you just ignore the uncertainties like you always want to do?”

What mean, “you”, Kemosabie? I’m the one considering them, quantitatively….

Reply to  bigoilbob
July 25, 2022 7:47 am

“It tells me that the parameters are distributed.”

Oh, brother! In other words it tells you nothing!

“For example, it could be the initial production rates of a step out drilling program. And since future drilling might be expected to have the same distribution of outcomes, the IP rates would indeed tend towards Gaussian. Hint: non-Gaussian data is still valuable.”

It could also be southern hemisphere data mixed with northern hemisphere data! So tell me what the standard deviation of the mean implies! Not some hand-waving about “well, it could happen again”.

“Yes,”

If those are temperatures then where does the mean exist in reality? You can calculate a mean value but what does it tell you about reality?

“If you have reason to believe that the data is representative, or can be projected in any way, you can use it for future decision making.”

If those are temperatures then what kind of decision making can be done from them?

“To 4 significant figures I am getting an increase from the “certain” data that was 25.27 to 25.28. I.e., none.”

Meaning you have absolutely *NO* idea of how to handle uncertainty! The mean value is 21. What do 25.27 and 25.28 have to do with anything? I assume you meant 21.27 and 21.28. I am also assuming you think the uncertainty is 0.01. In addition, the significant figure size here is two, not four.

If q_avg = x_total/n, then you get 21 for the average using significant figures.

The uncertainty of the average is:

u(q)/q = u(x1)/x1 + u(x2)/x2 + … + u(x8)/x8 + u(n)/n

I picked bad stated values when I chose zero since you can’t calculate the relative uncertainty with a value of zero, so we’ll just shift everything by adding a 1 to the stated values.

1 ± 0.5, 1 ± 0.5, 2 ± 0.25, 3 ± 0.75, 14 ± 0.5, 28 ± 1, 62 ± 0.25, 64 ± 0.25

So:

u(q_avg)/q_avg = 0.5/1 + 0.5/1 + 0.25/2 + 0.75/3 + 0.5/14 + 1/28 + .25/62 + .25/64 ≈ 1.5

u(q_avg)/q_avg ≈ 1.5

I’ll leave it to you to do the root-sum-square method (hint: should be about 0.8). u(q_avg) = 21 * 0.8 = 17

So your average (mean) will be 21 +/- 17. In other words your average value is worthless. It would range from 4 to 38!

AVERAGE UNCERTAINTY IS *NOT* UNCERTAINTY OF THE AVERAGE.

Uncertainty of the average here is +/- 17. It is *NOT* +/- 0.19.

If this distribution was a sample of a larger population it wouldn’t matter. If you have five samples and their means are:

20 +/- 1
19 +/- 2
18 +/- 0.5
17 +/- 0.75
19 +/- 0.4

The standard deviation of the sample means is 1 and the mean is 18.6. But that is not the uncertainty of that 18.6 mean. The uncertainty of the mean (using root-sum-square) is +/- 0.13.

So the mean of the sample means should be given as 18.6 +/- 0.1.

In this example the uncertainty worked out to be less than the standard deviation of the sample means. Many times, however, this is *not* the case. The uncertainty of the average of the sample means will remain the propagated uncertainty from the sample means, not the standard deviation of the sample means.

“What mean, “you”, Kemosabie? I’m the one considering them, quantitatively….”

Nope. You don’t even know how to propagate uncertainty! So how can you be considering them?
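The two headline numbers in the comment above can be reproduced directly. This checks only the arithmetic as written; whether relative uncertainties should be combined this way for an average, rather than via absolute uncertainties, is the substantive disagreement in the thread:

```python
# Shifted stated values and uncertainties from the comment:
data = [(1, 0.5), (1, 0.5), (2, 0.25), (3, 0.75),
        (14, 0.5), (28, 1.0), (62, 0.25), (64, 0.25)]

# Straight sum of relative uncertainties (worst case), quoted as 1.5:
lin = sum(u / x for x, u in data)

# Root-sum-square of relative uncertainties, quoted as "about 0.8":
rss = sum((u / x) ** 2 for x, u in data) ** 0.5
```

The exact values are about 1.45 and 0.76, which the comment rounds to 1.5 and 0.8. Note how the two small stated values with ±0.5 uncertainty dominate both sums.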

Reply to  Tim Gorman
July 25, 2022 8:36 am

“So your average (mean) will be 21 +/- 17. In other words your average value is worthless. It would range from 4 to 38!”

Your arithmetic is from Bizarro world. NO idea where these values came from. But stepping back, while my (correct) standard deviation of the average is even higher than yours, what makes it “worthless”? You seem hysterically blinded to the fact that highly distributed data can still be valuable. We oilfield trash would be kicking rocks otherwise.

“What do 25.27 and 25.28 have to do with anything?”

It represents the change in the standard deviation of the average, by including the uncertainties in the individual data points. None, in practice.

“I am also assuming you think the uncertainty is 0.01. In addition, the significant figure size here is two, not four.”

No, it is the increase in it, from including the uncertainty of the data. I used 4 sig figs as a bone throw to you. I wanted to show you that the standard deviation of the average did change. Just by very, very, very, little.

“The standard deviation of the sample means is 1 and the mean is 18.6. But that is not the uncertainty of that 18.6 mean. The uncertainty of the mean (using root-sum-square) is +/- 0.13.
So the mean of the sample means should be given as 18.6 +/- 0.1.”

Lost In Space as usual. The “mean of the sample”, including evaluation of the individual data uncertainties is 18.6 +/- 1.4. Yes, up some from 18.6 +/- 1.0. This is because the uncertainties in the individual data points are relatively larger, compared to the expected value sum of variance, for this set of values, compared to your first.

Again get thee to a community college. Audit Engineering Statistics 101, transferable to an actual engineering school later. All it would cost is a few audit fees, gas, time (it’s obvious that you’re not busy), and a used book. The scales would then fall from your eyes, and like Rush, if he had been able to complete his 4th try at drug rehab, you would have your epiphany…

Reply to  bigoilbob
July 25, 2022 9:56 am

“Your arithmetic is from Bizarro world. NO idea where these values came from.”

THEY ARE RIGHT THERE IN THE POST!

The relative uncertainty is 0.8. 0.8 x 21 = 17!

You are much like bellman. Neither of you can do basic algebra! And you are a professional engineer?

“It represents the change in the standard deviation of the average, by including the uncertainties in the individual data points. None, in practice.”

The change in standard deviation is *NOT* the uncertainty! Just like bellman all you know how to do is ignore actual measurement uncertainty!

“No, it is the increase in it, from including the uncertainty of the data”

Standard deviation is *NOT* measurement uncertainty.
The change in standard deviation is not measurement uncertainty. Where in Pete’s name did you learn this concept?

tg: “The standard deviation of the sample means is 1 and the mean is 18.6. But that is not the uncertainty of that 18.6 mean. The uncertainty of the mean (using root-sum-square) is +/- 0.13.

So the mean of the sample means should be given as 18.6 +/- 0.1.”

“Lost In Space as usual. The “mean of the sample”, including evaluation of the individual data uncertainties is 18.6 +/- 1.4. Yes, up some from 18.6 +/- 1.0.”

The mean of what sample? I gave you the means of FIVE samples and calculated the standard deviation of the stated values of those sample means. I then calculated the root-sum-square value of the uncertainties. I have no idea where you came up with +/- 1.4!

“Again get thee to a community college. Audit Engineering Statistics 101, transferable to an actual engineering school later. All it would cost is a few audit fees, gas, time (it’s obvious that you’re not busy), and a used book. The scales would then fall from your eyes, and like Rush, if he had been able to complete his 4th try at drug rehab, you would have your epiphany…”

The only one here that needs training seems to be you. You should have had to study under the grad student teaching my EE Lab 101: the lab where eight of us students each built separate amplifiers, took one measurement of each, and averaged the eight results together to get our final answer on the lab. WE ALL FAILED THAT EXERCISE!

And I’m sure you have not got even the faintest of ideas as to why!

MGC
Reply to  Tim Gorman
July 23, 2022 7:39 am

As already stated more than once now, if Gorman’s “too much uncertainty” claims were really true, then the published measurement values would be varying all over the map from month to month. The simple fact that they don’t refutes entirely Gorman’s handwaving nonsense.

Gorman’s “too much uncertainty” claims are every bit as comically ridiculous as claiming that a gun doesn’t have the accuracy to hit a target, even after a shooter just used it to hit the bull’s eye five times in a row.

What a joke. And these so-called “skeptics” like Gorman still wonder why they are not taken seriously by the scientific community. SMH in disbelief.

Reply to  MGC
July 24, 2022 3:38 pm

“Gorman’s “too much uncertainty” claims are every bit as comically ridiculous as claiming that a gun doesn’t have the accuracy to hit a target, even after a shooter just used it to hit the bull’s eye five times in a row.”

You just keep on demonstrating your ignorance. A bullseye five times in a row demonstrates accuracy AND precision. That is small uncertainty. But unless those five are a one-hole result then there is *still* some uncertainty in where the next shot will hit. You cannot estimate where that will be by ignoring the uncertainty. My guess is that you can’t even enumerate all of the uncertainty factors in such an attempt!

MGC
Reply to  Tim Gorman
July 25, 2022 10:20 am

This Gormanian nonsense just grows ever more and more ridiculous and ever more tiresome.

Of course there is still some uncertainty in where the next shot will hit. Duh. But, as demonstrated by the data from the previous shots, all hitting the bull’s eye, that uncertainty is quite small. Nor is that small uncertainty “ignored” as Gorman wants to try to pretend.

The uncertainty of the mean ocean heat content data is similar. The values do not vary all over the place from month to month as Gorman’s “too much uncertainty” claims would require in order to be “correct”.

Gorman can handwave, stomp his feet, and pretend away as much as he wants, but the actual month by month data itself nevertheless totally refutes his utterly ridiculous “too much uncertainty” claims.

Reply to  MGC
July 25, 2022 12:22 pm

“Of course there is still some uncertainty in where the next shot will hit.”

How do you calculate it for the future when you ignore it in the data you have?

“But, as demonstrated by the data from the previous shots, all hitting the bull’s eye, that uncertainty is quite small.”

Being small doesn’t mean you can ignore it – unless it is you, bellman, or a climate scientist!

“The uncertainty of the mean ocean heat content data is similar. The values do not vary all over the place from month to month as Gorman’s “too much uncertainty” claims would require in order to be “correct”.” (bolding mine, tg)

Ahhh…. And now we get into the argumentative fallacy of Equivocation – change the definition of what we are discussing. When did the subject become JUST THE OCEAN TEMPERATURE?

The ocean temp can vary from something like -2C to 30C. And they *do* vary from month to month although the variation is not uncertainty. The uncertainty is in the measurement, not the stated value. Remember, measurements should be given as “stated value +/- uncertainty”. We are speaking of the uncertainty part of the measurement, not the time-related variation in the stated value!

“but the actual month by month data itself nevertheless totally refutes his utterly ridiculous “too much uncertainty” claims.”

No, it doesn’t. You STILL confuse variation in the stated value with the uncertainty associated with the measurement of the stated value. You can whine and cry all you want about me but you *do* deny measurement uncertainty exists, just like bellman and the climate scientists!

MGC
Reply to  Tim Gorman
July 25, 2022 3:52 pm

Gorman’s comments have once again become way beyond ridiculous. But what else is new.

No one is “ignoring” uncertainty in the shooting example. Gorman imagining that this is the case is an utter absurdity. But what else is new.

re: “You confuse variation in the stated value with the uncertainty associated with the measurement of the stated value.”

False. But what else is new. The changes of the mean ocean heat content from month to month consist of some variation in the stated value and some uncertainty with the measurement of the stated value.

One can always put a worst case upper bound on the uncertainty of the measurements by assuming that all of the month to month changes are due to measurement variation.

The fact that these variations are tightly distributed relative to the magnitude of the decades long trend, leading to a highly statistically significant p-value < 0.0001 for the increasing ocean heat content trend, totally refutes Gorman’s utterly laughable “too much uncertainty” grasping at straws falsehoods.

MGC
Reply to  Tim Gorman
July 25, 2022 4:18 pm

An even more comical exposition of Gorman’s utterly ludicrous nonsense is this:

Gorman makes believe that researchers are not following the correct statistical methods as laid out in his sacred Taylor textbook; however, if one bothers to read the publications by the researchers who have actually done the ocean heat content analysis, one finds that the methods they have used are in fact taken right out of the Taylor textbook and are referenced as such.

One can surmise that Gorman has, of course, never done any such reading of the actual research, preferring to remain instead in his shameful cesspool of willful ignorance.

“Hoist by your own petard” is once again the phrase that comes to mind here.

Reply to  Tim Gorman
July 24, 2022 8:23 am

“Uncertainty does *NOT* cancel unless you have a Gaussian distribution and multiple measurements of different things using different devices simply can’t be assumed in such a situation.”

Documentation, please. But thanks for accidentally admitting that uncertainties are distributed.

W.r.t. SPC. It is an application of the same statistical laws that have been evolved for centuries. There is no daylight between it and the statistical evaluations you decry, fact free, in climate science.

Reply to  bigoilbob
July 24, 2022 10:27 am

Documentation? You mean you can’t look at the temperature record and see that it is not a Gaussian distribution?

Start with Pielke, 2007, “Documentation of Uncertainties ….”

See also Hubbard, Lin, 2007, “On the USCRN Temperature System”

Hubbard and Lin have several studies on temperature measurement uncertainty. One notes that the micro-climate below the measurement station even affects its readings, e.g. fescue grass vs Kentucky bluegrass, sand vs bare earth, etc. It concludes that any adjustment factor for a station must be done on an individual station basis, not on a grid basis.

“W.r.t. SPC. It is an application of the same statistical laws that have been evolved for centuries.”

Derived by statisticians who never once used a data set where each element is shown as a “stated value +/- uncertainty interval”. Only as a stated value, i.e. the uncertainty interval is assumed to be zero or to always cancel.

Here is the common definition for uncertainty in SPC:

“Measurement uncertainty: in simplistic terms in dimensional metrology, it can be said to be “A non-negative parameter characterising the dispersion of the values attributed to a measured quantity”. This potential uncertainty has a probabilistic basis that reflects incomplete knowledge of the measured quantity. Here, all measurements are subject to degree of uncertainty, and a measured value is only complete if it is accompanied by a “Statement of the associated uncertainty”. Relative uncertainty is the term obtained from the actual measurement uncertainty divided by the measured value.”

Note carefully that it speaks to a “measured quantity”, i.e. a single object being measured. NOT multiple measured quantities of different objects.

When you measure maximum and minimum temperature you have measured TWO DIFFERENT THINGS, one time each. Each measurement will have an uncertainty interval that is independent of the other. One measurement cannot give you a normal distribution or *any* kind of distribution that can be used to cancel uncertainty for either measurement. And the independent uncertainty of one measurement cannot cancel the independent uncertainty of the other measurement. You might get *partial* cancellation but you need to be able to show that this is the case rather than just assuming it.

Perhaps an example will help explain this.

You pull a sample plate with drilled holes from process1 at time1 and measure the diameter of the holes. You do the same thing for process2 at time1.

Now you come back 24 hours later and do the same thing at time2.

The diameter of the holes from process1 will have changed (can you guess why and what the effect is?).

The diameter of the holes in process2 will have changed as well.

Will the change in process1 tell you what the change in process2 is?

Ans: NO. They are independent objects being acted upon by independent processes. The uncertainty of one is independent of the other. You can’t assume that you can adjust the equipment in process2 by the same amount as for process1. NO CANCELLATION.

It’s the same for min/max temps. You can’t assume that the uncertainties in each form a random distribution that cancels.

I simply cannot understand how any engineer being held responsible for results in the field can’t understand this.
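On the specific min/max case: under the standard independent-error rule, the uncertainties of two separate readings combine in quadrature rather than cancelling, so the daily mean (Tmax + Tmin)/2 inherits a reduced but nonzero uncertainty. This sketch assumes independence, which, as the comment argues, itself has to be demonstrated rather than assumed:

```python
def daily_mean_u(u_max, u_min):
    """Uncertainty of (Tmax + Tmin)/2 for independent reading errors:
    sqrt(u_max^2 + u_min^2) / 2."""
    return (u_max ** 2 + u_min ** 2) ** 0.5 / 2

# Two readings each good to +/- 0.5: the mean is good to ~0.35, not 0.
u = daily_mean_u(0.5, 0.5)
```

Even in the most favorable case the rule allows, the result is never zero; full cancellation is not something the standard propagation formulas produce.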

Reply to  Tim Gorman
July 24, 2022 12:56 pm

Once again, you’re providing a fairy tale instead of actual documentation. But to answer your irrelevant question, drill bits wear.
 
And also once again, all of the processes used in SPC are drawn from classical statistical theory.
 
“It’s the same for min/max temps. You can’t assume that the uncertainties in each form a random distribution that cancels.”
 
Which is why we don’t. With increasing data, the uncertainties of the averages and trends are minimized. Depending on the mix of distributions for distributed input data, and the standard errors of their expected values, maybe or maybe not per the rule that the sum of the variance equals the variance of the sum. But always minimized.
 
AGAIN, your claim is that the statistical laws do not apply; it is your responsibility to tell us when they do not.

Reply to  bigoilbob
July 24, 2022 3:44 pm

“Once again, you’re providing a fairy tale instead of actual documentation. But to answer your irrelevant question, drill bits wear.”

In other words you are too illiterate to even find the documentation I gave you on the internet. Why am I not surprised?

“And also once again, all of the processes used in SPC are drawn from classical statistical theory.”

And SPC assumes you are measuring the same thing multiple times. You can’t even address the situation where you are measuring multiple things one time – WHICH IS WHAT TEMP MEASUREMENTS ARE!

Why can’t you address how to handle temperature measurements that have uncertainty?

“With increasing data, the uncertainties of the averages and trends are minimized.”

Only if they are multiple measurements of the same thing! Why can’t you get outside of your narrow box where you assume that all uncertainty cancels?

All you have to do is take minimum and maximum temperatures, which are measurements of different things, and show how the uncertainties of each cancel when you calculate their average. Show us explicitly how the uncertainties cancel. Address both random effects and systematic effects.

Put up or shut up!

Carlo, Monte
Reply to  MGC
July 23, 2022 6:41 am

“The uncertainty of the mean value of 4000 measurements is far smaller than the uncertainty of any single measurement.”

Another climate astrologer talking through his hat and showing his lack of clothing.

Reply to  Carlo, Monte
July 24, 2022 3:46 pm

He can’t even show explicitly how the uncertainties of a daily minimum and maximum temperature cancel. And yet he expects that to happen with any number of temperature measurements.

It’s an article of faith he learned in Statistics 101 at college, perhaps even Statistics 101 for Business majors!

MarkW
Reply to  Tim Gorman
July 21, 2022 5:07 pm

To the accuracy limitations of the probes themselves, you have to add the accuracy limitations caused by a grossly inadequate number of probes.

Reply to  MarkW
July 22, 2022 11:38 am

You don’t “add” them, but you do consider them. They add to the sum of variance in averaging and trending evaluations. Thankfully, even the most overinflated estimates of old timey measurement error, when included in regional or global temperature or sea level trends, over physically/statistically significant time periods, make very little difference to the standard errors of those trends.

Carlo, Monte
Reply to  bigoilbob
July 23, 2022 6:52 am

“consider them” — HAHAHAHAHAHAHAHAHAH

Keep painting yourself into the corner blob, it is hilarious.

Reply to  Carlo, Monte
July 23, 2022 4:48 pm

““consider them” — HAHAHAHAHAHAHAHAH”

I told you exactly how you would consider them. The sum of the variance is a part of both the evaluations of the standard deviation of an average and of the standard error of a trend. The sum of the variance of the distributed data point errors is simply added to that of either the “expected value” averaging or trending evaluation (depending on what you are doing) and the required parameter is then calculated.
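Stated generically, that procedure looks like the following. This is a sketch under the independent-error assumption (so that variances add), not the poster’s actual calculation:

```python
from statistics import pvariance

def var_of_average(values, u):
    """Variance of the average of `values`, combining the sampling
    variance of the stated values with the propagated variance of the
    per-point uncertainties `u` (independent-error assumption)."""
    n = len(values)
    sampling = pvariance(values) / n                  # spread of stated values
    measurement = sum(ui ** 2 for ui in u) / n ** 2   # per-point errors
    return sampling + measurement
```

With all u zero this reduces to the usual variance of the mean; nonzero per-point uncertainties only ever increase it, which matches the claim that including them changes the standard error, however slightly or greatly, depending on their size.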

Don’t believe me? Download this data.

https://www.epa.gov/climate-indicators/climate-change-indicators-sea-level

Now, from 1980-2013, for the pre sat data, calculate the acceleration, and its standard error. You should get:

0.00586 +/- 0.000661 in*yr^2

Now, do the same thing including the provided annual standard deviations. You get:

0.00586 +/-0.000871 in*yr^2

You have just increased your chance that there is either no or negative acceleration from

4.07E-19 to a whopping

5.65E-12

Tell ’em what they won, Monty.

But of course you have NO idea how to do any of this, do ya’…

Reply to  bigoilbob
July 23, 2022 5:50 pm

Correction:

yr^2 s/b yr^-2

A killer error, and I regret my carelessness. Especially when responding to someone as sharp as Mr. Carlo.


At “Mines” schools, units rule. And so they should. Almost anyone with fundamental engineering skills who is not Pat Frank recognizes this.

Reply to  bigoilbob
July 24, 2022 9:05 am

“The sum of the variance is a part of both the evaluations of the standard deviation of an average and of the standard error of a trend.”

This *ONLY* applies to a distribution that is normal or at least identically distributed around a mean.

You have yet to explain how measurements of different things like temperature using different measurement devices can result in a normal distribution of both the stated values and the uncertainty.

Until you can do this nothing you say means anything.

Time to put up or shut up!

Show how maximum or minimum temperatures across just the US will result in a normal distribution or admit you have absolutely no understanding of metrology and uncertainty.

Reply to  Tim Gorman
July 24, 2022 9:16 am

“This *ONLY* applies to a distribution that is normal or at least identically distributed around a mean.”

Ah, the motorized goal posts. Instead of trying in vain to back up your fact free assertion, you just modify it slightly. Let’s cut to the chase. Since this is your vague, moveable claim, it’s up to you to document exactly which data sets containing which kinds of distributions are not part of the law that variance of the sum is the sum of the variance. FYI, I can start. Both for trending and averaging, correlated data has a smaller standard error and standard deviation than non correlated data.

Now, time to document….

Reply to  bigoilbob
July 30, 2022 11:29 am

This entire forum is about climate – i.e. temperatures.

The data sets are made up of individual, random, independent variables, that means individual measurements of different things using different measurement devices. There is simply no way to guarantee a distribution that is normal or identically distributed around a mean.

There is *NO* moving of the goal posts. Standard deviation and mean are statistical descriptors that *only* apply to normal distributions. They don’t apply to skewed distributions. They don’t apply to multi-modal distributions. For those kinds of distributions the five-number description should be used: minimum, first quartile, median, third quartile, maximum.

Since the temperature data sets are *NOT* normal distributions the use of the mean and the standard deviation is just a crutch used by those who simply want to ignore statistical standards.

“it’s up to you to document exactly which data sets containing which kinds of distributions are not part of the law that variance of the sum is the sum of the variance.”

Do you even realize what you have said here? I doubt it. Since each individual, independent, random temperature in the data set has a variance denoted by its measurement uncertainty, you are saying that the variance of the sum is the sum of the variances of all the individual data set members, i.e. the sum of the measurement uncertainties.

That is *EXACTLY * what we’ve been trying to tell you from the start. You simply cannot ignore the measurement uncertainties. All those uncertainties add up when you combine them!

Thanks for finally recognizing reality!
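As a concrete illustration of the five-number summary recommended above, here is a sketch using an invented, deliberately right-skewed lognormal sample (nothing to do with any real temperature data set), where the mean and standard deviation are misleading descriptors:

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # right-skewed by construction

# Five-number summary: min, Q1, median, Q3, max
five = np.percentile(sample, [0, 25, 50, 75, 100])
print("min, Q1, median, Q3, max:", np.round(five, 3))
print("mean:", round(sample.mean(), 3), "median:", round(float(np.median(sample)), 3))
# For a right-skewed distribution the mean sits well above the median,
# which is exactly why the five-number summary is preferred here.
```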

Dave Andrews
Reply to  Tim Gorman
July 22, 2022 9:59 am

Plus each Argo float represents an area of ocean the size of Portugal. Would we take a single land temperature and say that represented all of Portugal?

Reply to  Dave Andrews
July 23, 2022 6:06 am

CAGW alarmists would!

MarkW
Reply to  MGC
July 21, 2022 5:04 pm

And here comes MGC to trot out all of the standard alarmist lies.

As to ocean temperatures, even 1000 ARGO probes fall short, by a factor of at least 100, of the number needed to come close to an accurate reading of the entire ocean’s temperature.
Beyond that, the claimed warming for the entire ocean is only about 0.003C. This with a woefully inadequate number of probes with an accuracy of only 0.5C out of the lab. After several years at sea, the accuracy has decreased by an unknowable amount.

As always, the trolls are absolutely incapable of comparing like to like. They proclaim that since worldwide temperature recordings are only available in the modern era, this proves that prior to the modern era it is impossible for there to have been a worldwide climate phenomenon. Regardless, every proxy record available shows the existence of the MWP, but since we don’t have proxies for every spot on the planet, the warming couldn’t have been worldwide.

So far the coping has cost nothing. The warming we have seen so far is 100% beneficial, and will be for at least the next 2 or 3C of warming. The historical record proves that.

MGC
Reply to  MarkW
July 22, 2022 8:12 am

Here’s MarkW with another sadly typical torrent of “skeptical” falsehoods.

The claim “accuracy of only 0.5C out of the lab” is wildly incorrect.

Argo Data 1999–2019: Two Million Temperature-Salinity Profiles and Subsurface Velocity Observations From a Global Array of Profiling Floats
Frontiers in Marine Science Sept 2020

“The accuracies of the float data have been assessed by comparison with high-quality shipboard measurements, and are concluded to be 0.002°C for temperature, 2.4 dbar for pressure, and 0.01 PSS-78 for salinity.”

https://www.frontiersin.org/articles/10.3389/fmars.2020.00700/full

If there were genuinely as large a degree of measurement uncertainty as these intentionally blind nay-saying “skeptics” claim, then the published mean ocean heat content values would be varying all over the place from month to month. The fact that they are not proves that these juvenile “too much uncertainty” ankle biting claims are, of course, completely false.

“warming for the entire ocean is only about 0.003C”

No surprise, totally wrong again. As usual. Too low by over a full order of magnitude.

MarkW
Reply to  MGC
July 22, 2022 9:25 am

Even carefully calibrated lab instruments in highly controlled environments have trouble measuring temperatures to 0.01C, much less 0.001C.
You really will believe any lie, so long as it supports your religious convictions.

Reply to  MGC
July 22, 2022 4:48 pm

Go look up Hadfield, 2007.

“The mean RMS difference across the whole section for the Argo based estimate (using the original OI parameters) is 0.58°C.”

Far too many so-called “scientists” today mistake sensor resolution for uncertainty. It is the entire float that causes the uncertainty, not the sensor resolution.

MGC
Reply to  Tim Gorman
July 23, 2022 8:39 am

How comical.

Hadfield 2007 was published 15 years ago, at a time when the full deployment of the Argo flotilla had not yet even been achieved.

Typical Gormanian fail.

Hadfield 2007 was one of the first research studies to initially investigate how useful the Argo data would be once full deployment was achieved.

Gorman, of course, blindly ignores the conclusions of this initial exploratory study:

“sampling of the temperature field at the Argo resolution results in a level of uncertainty … that is sufficiently small that it should allow investigations of variability in this region, on these timescales.”

What part of “sufficiently small uncertainty” is Gorman unable to comprehend?

Another 15 years of operation of the full Argo flotilla, and the analysis of the full flotilla data, post Hadfield 2007, have only further enhanced those initial findings.

Gorman’s ankle biting nay-saying continues to be nothing but ludicrous nonsense. His babble remains every bit as ridiculous as claiming that a gun “doesn’t have the accuracy to hit a target” even after a shooter used it to hit the bull’s eye five times in a row. Pure willfully ignorant garbage.

MGC
Reply to  MarkW
July 22, 2022 9:53 am

Oh, by the way, here are a couple other MarkW misrepresentations and falsehoods:

“every proxy record available shows the existence of the MWP”

Very doubtful that this is actually true, but moreover, the proxies that do show a “MWP” show that it occurred at widely different times, sometimes centuries apart, at different places around the globe. But of course so called “skeptics” like MarkW just disingenuously ignore that information, because it doesn’t fit the nay-saying “skeptical” agenda.

The claim “the warming we have seen so far is 100% beneficial” is also ridiculously false. Below is just one example of many that could be cited:

Sea Level Rise Caused an Extra $8 Billion in Losses During Hurricane Sandy
https://e360.yale.edu/digest/sea-level-rise-caused-an-extra-8-billion-in-losses-during-hurricane-sandy

And there will be many, many more examples like this, and worse, in the coming decades, as the planet continues to warm and sea levels continue to rise.

Mr.
Reply to  MGC
July 21, 2022 5:32 pm

MGC, I told you once before Scooter that any global warming and sea level rise that will have any consequential effects on habitation will take decades / generations to have any real impacts.

If remedial actions are required to ports, infrastructure, housing etc, that will provide much-needed work activity and economic boost for the future working age generations.

Just as WW2 did for those 1950s & 60s generations with all the re-building that was required.

Next time, however, there won’t be millions of people tragically killed in wars to create a huge work-making opportunity. Global warming is a doddle to deal with compared to 1,000-plane bombing raids.

If the world is lucky, global warming will also open up huge new areas for agriculture to feed a hungry world.

MGC
Reply to  Mr.
July 22, 2022 8:23 am

Here’s Mr. ridiculously imagining that things like putting much of the state of Florida underwater, which is a highly likely eventual outcome under “business-as-usual” practices, would somehow represent an “economic boost”.

And one has to be truly delusional, if not downright sick, to imagine that calamities like heinous world war destruction are “lucky” circumstances for providing “much-needed work activity and economic boost for the future working age generations”.

Words cannot even begin to describe the disgust felt for Mr.’s truly contemptible comments.

MarkW
Reply to  MGC
July 22, 2022 9:27 am

So an 8 inch increase in sea level over the next 100 years is going to put Florida under water?

Is there any lie so ludicrous that you won’t support it?

MGC
Reply to  MarkW
July 22, 2022 10:59 am

And here we go with yet *another* round of utterly tragic MarkW falsehoods.

MarkW has been shown more than once before that current global sea level rise rate is around 3.5 mm/yr. (AVISO, NOAA, NASA, CSIRO) That’s well over a foot in the next 100 years, not just his false 8 inches claim.

The rise rate has also been increasing, though so-called “skeptics” like MarkW also pretend away (i.e. simply lie about) that information, too.

Moreover, the rise rate along the U.S. east coast is much greater than the worldwide average. For example, Boston, NYC, and Miami have been seeing rise rates of 5-7 mm/yr. this century (Permanent Service for Mean Sea Level). That’s around two feet per 100 years.

Lastly, this sea level rise won’t just magically vanish at the end of this century. It will continue for centuries more.

People who have a sense of integrity don’t leave a looming issue like that for centuries of future generations to deal with. But apparently MarkW is not one of those kinds of people.

MarkW
Reply to  MGC
July 22, 2022 8:53 pm

As usual, MGC only takes the data set that shows what he wants to believe. But even if his false claims were for once accurate, that still doesn’t work out to Florida being underwater.

What is it about alarmists and their need to believe any catastrophic forecast they’re told to believe?

MGC
Reply to  MarkW
July 23, 2022 9:15 am

“only takes the data set that shows what he wants to believe”

Such utterly ludicrous drivel. Four different global datasets were referenced. They all show the same thing.

“that still doesn’t work out to Florida being underwater.”

More woefully intentional “skeptical” ignorance.

Unified Sea Level Rise Projection Southeast Florida:

“In the short term, sea level rise is projected to be 10 to 17 inches by 2040 and 21 to 54 inches by 2070 (above the 2000 mean sea level in Key West, Florida). In the long term, sea level rise is projected to be 40 to 136 inches by 2120.”

William Wilson
July 21, 2022 1:40 pm

The Trick is now showing on Netflix. Subscription cancel? Required viewing.

July 21, 2022 1:42 pm

I’m confused.
Is the missing heat hiding in the ocean again or a tree ring?

Richard Page
Reply to  Gunga Din
July 21, 2022 3:34 pm

I thought it was supposed to be in pine cones?

July 21, 2022 1:45 pm

From the above so-called “fact check” by http://www.carbonbrief.org:
“Human-emitted greenhouse gases trap extra heat in the atmosphere. While some of this heat warms the Earth’s surface, the vast majority – around 93% – goes into the oceans.”

So sophomoric, those two back-to-back sentences . . . so wrong on both counts.

1) Greenhouse gases do not “trap” heat energy in the atmosphere. Instead, they briefly intercept LWIR radiated off Earth’s surfaces and then promptly redistribute that “extra” energy to the rest of the gases in the atmosphere (predominantly nitrogen and oxygen), which are then free both to convect to TOA and to radiate that energy directly to space. This is why there is a current balance (“equilibrium”) between Earth’s incoming solar energy and Earth’s outgoing radiation energy. If greenhouse gases continuously “trapped” heat energy, Earth’s land and surface temperatures would reach the boiling point of water.

2) There is no credible evidence that 93% of the supposedly trapped (see #1 above) incoming solar energy passes into Earth’s oceans. If it did, the oceans would exhibit a significantly higher warming trend over time than currently measured.
“Based on time series of global ocean heat content variability calculated from Argo temperature measurements in the 10-1500m depth layer. The average [ocean] global warming rate accounts for 0.54±0.1 Wm-2 during the years 2005-2010.” (source: https://climatedataguide.ucar.edu/climate-data/ocean-heat-content-10-1500m-depth-based-argo ).
But there is this:
“The 2 degrees Celsius global temperature increase limit translates to a radiant energy increase of 2.5 watts per square meter.” (source: https://www.nsf.gov/news/news_summ.jsp?cntn_id=116862 ). Over the last 50 years, Earth’s average global (land and ocean) temperature has increased by about 0.9 °C, so the ratioed radiant energy increase would be about (0.9/2.0)*2.5 = 1.13 W/m^2. Comparing this to the preceding quoted Argo data, we see that only 0.54/1.13 = 0.48 = 48% of the “trapped” incoming solar energy passes into Earth’s oceans, NOT the 93% asserted by the carbonbrief.org folks.
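The back-of-envelope arithmetic above can be written out explicitly (numbers exactly as quoted in the comment; whether the linear ratioing of forcing to temperature is physically defensible is a separate question):

```python
# All inputs are the figures quoted in the comment, not independently verified.
forcing_at_2C = 2.5          # W/m^2 corresponding to a 2 C rise (NSF quote)
observed_rise = 0.9          # C over the last 50 years (as stated above)
ocean_uptake = 0.54          # W/m^2 ocean warming rate, Argo 2005-2010

# Linearly ratio the 2 C forcing down to the observed 0.9 C rise
ratioed_forcing = (observed_rise / 2.0) * forcing_at_2C   # -> 1.125, rounded to 1.13

# Fraction of that "trapped" energy accounted for by the ocean uptake figure
fraction = ocean_uptake / ratioed_forcing                 # -> 0.48, i.e. 48%

print(round(ratioed_forcing, 2), round(fraction, 2))
```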

Eric Worrall summed-up the above far better than I can:
“Desperate butt covering from alarmists who are facing increasingly embarrassing questions about the failure of the world to end.”

Tom Abbott
Reply to  Gordon A. Dressler
July 22, 2022 4:46 am

One of the more effective methods to attack the alarmist argument is to point out that every dire prediction they make about the Earth’s weather has failed to materialize. They are wrong every time. Going by the polls, many people seem to have stopped listening to them as only about one to three percent of those polled recently give a high priority to climate change.

Fool me once, shame on you. Fool me twice, shame on me.

July 21, 2022 1:55 pm

“Except for that period between the 1940s to 1970s, when the drop in global temperature triggered climate scientists like Stephen Schneider to suggest we should use nuclear reactors to melt the polar ice, to prevent an ice age.”

He does not suggest we should in that video:

Can we do these things? Yes. But will they make things better? I’m not sure. We can’t predict with any certainty what’s happening to our own climatic future. How can we come and intervene then in that ignorance? You could melt the ice-caps; what would that do to the coastal cities? The cure could be worse than the disease. Would that be better or worse than the risk of an ice age?

Tom Abbott
Reply to  Bellman
July 22, 2022 5:02 am

“We can’t predict with any certainty what’s happening to our own climatic future. How can we come and intervene then in that ignorance?”

Nothing has changed since that was first said. How can we intervene in our current ignorance?

Fools rush in where Angels fear to tread.

We have a lot of fools in positions of power today and they are rushing in for various selfish or delusional reasons, causing much unnecessary pain for the rest of the world, directly or indirectly.

About the only cure for this CO2 delusion is a significant temperature downtrend. That is a distinct possibility going by past temperature history.

Reply to  Tom Abbott
July 22, 2022 5:49 am

So you accept this article was being misleading when it said that Stephen Schneider was arguing we should melt the polar ice caps?

Your claim that nothing has changed over the last 45 years is not one I recognize.

Dan W
July 21, 2022 2:06 pm

There is then and there is now. The questions with regards to now are:
1) Is CO2 a greenhouse gas? IOW, does a significant increase of CO2 in the air make it hotter?
2) Has there actually been a significant increase in CO2? Measure it.
3) Does the increase from vehicle, industrial and energy emissions match what we see in the air?
4) Do other causes of warming in the past somehow invalidate warming due to greenhouse gases?

gbaikie
July 21, 2022 2:24 pm

The Holocene is roughly following the pattern of recent interglacial periods, though it’s different in that it looks like the Holocene has been significantly cooler.
Which might be related to Earth being the coldest it’s been in the last tens of millions of years; in climbing out of this very cold period, it had a hiccup. And the hiccup was so serious that one might question whether the Holocene is actually an interglacial period. But we chose to call it an interglacial period despite sea levels rising only about 2 meters higher than present sea levels, while it’s thought the last interglacial was 5 to 9 meters higher than current sea levels.
And the ocean during the Eemian interglacial period is thought to have had an average temperature of 4 C or warmer.
And currently our ocean averages about 3.5 C.
The Holocene interglacial period has been gradually cooling, as all other interglacial periods did. This gradual cooling of the Holocene has been occurring for over 5000 years.
More than 5000 years ago, sea levels were higher, the Sahara Desert was mostly grassland and forest, and we had an ice-free polar sea in the Arctic Ocean. And having an ice-free polar sea allowed the great northern forests to have trees where today there are just frozen stumps.
In other words, our great northern forest was a larger forest. And the now-dry Sahara Desert was in a period called the African humid period: https://en.wikipedia.org/wiki/African_humid_period

Stevek
July 21, 2022 2:40 pm

One good argument is to say AGW science does not follow the gold standard of science. The gold standard is the double blind study. With Covid we heard about double blind studies of drugs to fight it. We were told to wait until the double blind study came out before putting faith in a drug.

Yet with the AGW hypothesis there is no double blind study, simply because there is only one Earth. There is then a hypocrisy in any scientist who puts full faith in AGW but also believes the double blind study is the gold standard of science. Scientists can’t have it both ways. If they have full faith in AGW then logically they deny the gold standard of a double blind study.

Tom Abbott
July 21, 2022 3:02 pm

From the article: “However, after a decade or so of slower-than-average warming, rapid temperature rise returned in 2015-16 and global temperatures have since remained quite warm.”

The truth is the global temperatures have cooled quite a bit, down 0.6C from the 2016 high point of the 21st century (the year 1998 was just as warm).

Here’s the evidence:

[chart image]

Alarmists should explain why more CO2 is going into the air every day, yet the global temperatures have cooled 0.6C. That doesn’t seem to fit the alarmist claim that more CO2 in the atmosphere means higher temperatures.

Tom Abbott
July 21, 2022 3:15 pm

From the article: “In fact, the last eight years have been unusually warm – even warmer than expected given the long-term rate of temperature increases – with global temperatures exceeding 1.2C above pre-industrial levels.”

What the author is talking about is the temperature highpoint of 2016, which NASA Climate claims was 1.2C warmer than their average, and NOAA claims was 1.1C warmer than their average.

Whichever figure you care to use, current global temperatures are 0.6C cooler than 2016.

If we keep cooling like this latest cooling trend, the current slight warming trend is going to turn into a cooling trend. It will be fun seeing the alarmists try to explain that, if it happens.

The real temperature profile of the globe shows the temperatures warm for a few decades and then they cool for a few decades and then they warm again, all the while staying within certain bounds high and low. At least, this is the case since the Little Ice Age, going by the unmodified, written historical temperature record.

So the temperatures have warmed for a few decades during the satellite era (1979 to the present), have hit a high point, and now are cooling, perhaps beginning decades of cooling. Wouldn’t that be a kick in the head for the alarmists.

I guess I ought to put this in here for perspective:

[chart image]

spren
July 21, 2022 3:15 pm

Down-welling IR energy can only penetrate the ocean surface to the thickness of a dime. UV penetrates the surface to a depth of several hundred feet before it is converted into IR. How exactly does the atmosphere warm the oceans, when it is clear that oceans transport stored heat from the sun and transfer that heat into the atmosphere? Freaking charlatan liars.

damp
July 21, 2022 3:18 pm

The problem (for the Chicken Littles) is not the Pauses, but the holy computer models’ utter failure to predict the Pauses. They should not be allowed to bait-and-switch like this.

July 21, 2022 3:19 pm

Attached is the temperature in the tropics from climate4you GlobalTemperatures. Has the temperature hiatus in the tropics run from 1998 to now?

Tropic temperature.png
Cosmic
July 21, 2022 3:48 pm

Does not matter with Marxists in charge. None of what you wrote matters 1 iota. To you, me and scientifically adept folks like you and the rest of us, of course it matters but not to marxist socialist commies of the democrat party. Not 1 single bit.

MarkW2
July 21, 2022 4:00 pm

The point is that if the models really were accurate, these pauses would show up. But they don’t. Any idiot can build a model with two increasing variables and any statistician who knows their stuff will tell you that such a relationship is meaningless.

This is the real reason the pause matters: it provides a great way to test the models, and they clearly fail the test.

H B
July 21, 2022 4:17 pm

“Three million years ago, the world was so warm Antarctica was mostly ice free”
Was it? The Arctic may have been, but Antarctica?

Bob
July 21, 2022 4:59 pm

Can someone help me with the graph? Why do the numbers on the right side of the graph get larger as they go down or are those dashes minus signs. If they are minus signs what would zero represent?

July 21, 2022 5:08 pm

Notice that the Minoan, Roman, Medieval and the present time are all more or less equally spaced, lending evidence to the theory that today’s warmth is entirely natural.

I like to post this on facebook pages pushing Al Gore’s climate scam:
There is NO CLIMATE CRISIS!

THE CLIMATE HAS ALWAYS CHANGED!

5000 years ago, there was the Egyptian 1st Unified Kingdom warm period  
4400 years ago, there was the Egyptian old kingdom warm period.
3000 years ago, there was the Minoan Warm period. It was warmer than now WITHOUT fossil fuels.
Then 1000 years later, there was the Roman warm period. It was warmer than now WITHOUT fossil fuels.
Then 1000 years later, there was the Medieval warm period. It was warmer than now WITHOUT fossil fuels. 1000 years later, came our current warm period. 

You are claiming that whatever caused those earlier warm periods suddenly quit causing warm periods, only to be replaced by man’s CO2 emission, perfectly in time for the cycle of warmth every 1000 years to stay on schedule. Not very believable.
 
The entire climate scam crumbles on this one observation because it shows that there is nothing unusual about today’s temperature and ALL claims of unusual climate are based on claims of excess warmth caused by man’s CO2.

Evidence that the Roman & Medieval warm periods were global: 
http://www.debunkingclimate.com/warm_periods.html
Evidence that those warm periods actually occurred:   
http://www.debunkingclimate.com/climatehistory.html

Much more evidence on climate:  
http://www.debunkingclimate.com

Even the IPCC debunks climate alarmism: http://www.debunkingclimate.com/ipcc_says.html

Feel free to disagree by showing actual evidence that man’s CO2 is causing serious global warming. (Or show your unwillingness to learn by posting a laughter emoji.)

July 21, 2022 5:53 pm

I’m thinking the drought and the warming trend in the Southwestern US has more to do with the PDO than CAGW. Once the PDO descends into a negative phase we should cool dramatically with precipitation also increasing. This is connected to solar activity. Any thoughts on this? My theory is that this past warming trend is driven by the PDO driven by changes in solar activity.

Reply to  John
July 22, 2022 2:30 pm

The US southwest has been a desert/semi-arid desert for thousands of years. I wouldn’t expect a *lot* of precipitation increase. Cooling maybe but cool air doesn’t generate the rain that warmer air does.

Geoff Sherrington
July 21, 2022 5:53 pm

There are many claims that the Earth has warmed over the last 150 years. There is always an uncertainty term to consider. If the 95% uncertainty is such that temperatures are good only to plus or minus half a degree C, then warming of claimed 1 degree C in 150 years is all inside the uncertainty limits. It has a probability of not being there at all.
Clearly, a great deal of attention needs to go into uncertainty estimates.
After 6 years of asking our Australian authorities I have finally got an answer that the routine historical daily temperatures have 95% confidence envelopes in the range of +/- 0.1 to 0.4 C., depending on several factors like type of thermometer, years of deployment etc.
These are unrealistic, IMHO. Temperatures from the 1850s have a lot more uncertainty than that, particularly from the lack of global coverage, which is not considered in those official estimates.
What uncertainty figures are realistic?
Have your national or local authorities published estimates? What are they?
This uncertainty factor is one of the biggest criticisms of allegations of global warming. Geoff S

MarkW
Reply to  Geoff Sherrington
July 22, 2022 9:32 am

Prior to the age of digital thermometers, all readings were rounded to the nearest degree.
This adds 0.5C of uncertainty on top of the confidence interval of the thermometer itself.
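A quick simulation (hypothetical readings, invented range) of the quantization error that rounding to the nearest whole degree introduces. The individual error is bounded by +/- 0.5, and for readings whose fractional part is effectively uniform, its standard deviation works out to about 0.29:

```python
import numpy as np

rng = np.random.default_rng(2)
true_temps = rng.uniform(10.0, 30.0, 100_000)   # invented "true" readings
rounded = np.round(true_temps)                  # what a whole-degree record keeps
err = rounded - true_temps                      # quantization error per reading

print("max |error|:", round(float(np.abs(err).max()), 4))   # bounded by 0.5
print("std of error:", round(float(err.std()), 4))          # ~ 1/sqrt(12) ~ 0.289
```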

Reply to  Geoff Sherrington
July 22, 2022 11:17 am

Let’s also not forget that before min/max LIG thermometers, the readings were taken manually at times during the day that might not capture the actual high or low temps. This increases the uncertainty tremendously. Anytime I see anomalies in the one-hundredths or one-thousandths place prior to WWII, I question the ability of the scientists in analyzing uncertainties. Too many times they want to “create” a Standard Error of the Mean (SEM) and treat that as the uncertainty. They have no clue about what they are doing.

Reply to  Jim Gorman
July 22, 2022 1:41 pm

min/max mechanical thermometers today can have an uncertainty of +/- 4F from the manufacturer. It is doubtful that uncertainty gets *better* when it is placed in the field.

Reply to  Tim Gorman
July 23, 2022 7:38 pm

+/-4F? The brochure I read was for half that. You might very well be right, but would you please doc?

Reply to  bigoilbob
July 24, 2022 9:07 am

I’m not your research assistant. Do your own research. I’m sure what you saw was +/- 2C, not +/- 2F.

Reply to  Tim Gorman
July 24, 2022 9:20 am

“I’m sure what you saw as +/- 2C, not +/- 2F.”

Perhaps. But per the rules of the road, your claim, your responsibility to back it up. Or, for the umpteenth time, per Chris Hitchens:

“That which can be asserted without evidence, can be dismissed without evidence.”

Reply to  Geoff Sherrington
July 22, 2022 1:39 pm

“the routine historical daily temperatures have 95% confidence envelopes in the range of +/- 0.1 to 0.4 C, depending on several factors like type of thermometer, years of deployment etc.”

These *are* unrealistic. Even today’s modern Argo floats have an uncertainty of +/- 0.5C. The US Federal Meteorology Handbook No. 1 specifies that federal temperature measurement devices only have to meet a +/- 0.6C uncertainty.

The best LIG min/max thermometers *today* have uncertainties greater than this, as much as +/- 4F (+/- 2C).

Too many people today assume the sensor resolution capability is the uncertainty of the device. That appears to be what you were quoted. RESOLUTION IS NOT UNCERTAINTY. You can easily have a high resolution device whose reading is very inaccurate with a large uncertainty. In fact, the higher the resolution of the sensor the easier it is for the whole device to be inaccurate because of component drift in the device.



Art
July 21, 2022 7:55 pm

The last eight years are the warmest eight years since records began in the mid-1800s….except for the 1930s.

Craig from Oz
July 21, 2022 8:54 pm

I love the way the authors of this document attempt to claim there is no pause by… saying there is a pause.

In fact, the last eight years have been unusually warm

Not actually the question. The question is: is it continuing to get warmer?

In simple terms it is the difference between velocity and acceleration. Not really all that advanced a topic.

One wonders if these authors don’t actually understand the difference, or if they do and are deliberately attempting to word-salad the readers into trusting them.

Reply to  Craig from Oz
July 22, 2022 2:58 pm

It’s even worse than this. The *anomalies* are unusually large. But there is no way to judge what has caused the anomalies to grow. Is it growth in minimum temps affecting the annual averages? Is it growth in the maximum temps that is affecting the annual averages? (annual average – long-term average = anomaly) Or is it a combination of both?

You can’t tell from just looking at the anomaly by itself. But the CAGW advocates *ALWAYS* assume the anomaly growth is from higher and higher max temps that are going to turn the earth into a cinder – *ALWAYS*!

Where is the catastrophe if it is minimum temps going up?
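A toy illustration of this point, with all numbers invented: three different min/max combinations produce the identical annual-mean anomaly, so the anomaly alone cannot distinguish between them.

```python
# Hypothetical baseline climatology (invented numbers)
baseline_min, baseline_max = 5.0, 15.0
baseline_mean = (baseline_min + baseline_max) / 2

# Three invented "warmer year" scenarios: same mean shift, different causes
cases = {
    "warmer nights only": (6.0, 15.0),
    "warmer days only":   (5.0, 16.0),
    "both, half each":    (5.5, 15.5),
}
for label, (tmin, tmax) in cases.items():
    anomaly = (tmin + tmax) / 2 - baseline_mean
    print(f"{label}: anomaly = {anomaly:+.2f} C")
# Every case yields the same +0.50 C anomaly despite very different min/max behavior.
```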

July 21, 2022 9:17 pm

How does the infrared trapped heat from atmospheric CO2 get into the oceans? It’s not from radiation, as the oceans are opaque to longwave IR. So it must be convection or conduction. Yet there is no explanation for that. Saying it goes into the oceans is silly without a mechanism.

MarkW
Reply to  Doonman
July 22, 2022 9:34 am

The infrared does not put heat into the oceans. It makes it harder for the heat that short wave radiation is putting into the oceans, to get out.

Reply to  MarkW
July 22, 2022 3:01 pm

What absorbs the longwave IR on the land surface? Silica and gypsum, the two most common substances on land, don’t respond much to IR at CO2 wavelengths. Neither does plant life or fresh water.

MarkW
Reply to  Tim Gorman
July 22, 2022 8:57 pm

Why fixate on just long wave? Both long wave and shortwave radiation heat land surfaces. And just as with water, the rate at which the heat comes out of the land depends on the temperature difference between the land and the air.
The warmer the air, the slower the heat comes out of the land.

Reply to  Doonman
July 23, 2022 4:37 am

The warmer atmosphere heats the surface layer of the oceans. The warmer surface layer impedes heat loss from lower layers of the ocean.

The main source of ocean heat is sunlight. Additionally, clouds, water vapor, and greenhouse gases emit heat that they have absorbed, and some of that heat energy enters the ocean. Waves, tides, and currents constantly mix the ocean, moving heat from warmer to cooler latitudes and to deeper levels.

Covering more than 70% of Earth’s surface, our global ocean has a very high heat capacity. It has absorbed 90% of the warming that has occurred in recent decades due to increasing greenhouse gases, and the top few meters of the ocean store as much heat as Earth’s entire atmosphere.

RoHa
July 21, 2022 10:12 pm

Melanie Phillips is insane, but Christopher Monckton isn’t (even if he did carry water for Thatcher in his callow youth) and can be taken seriously.

RoHa
July 21, 2022 10:13 pm

And as I’ve said before, all the Missing Heat (TM) has collected at the bottom of the sea, and one day soon it will suddenly rise to the surface and go rampaging across Tokyo. You mark my words.

H.R.
Reply to  RoHa
July 22, 2022 7:57 am

No worries. Mothra will deal with it. Mothra needs something down there to keep warm, so that heat isn’t going anywhere without Mothra putting up a struggle.

(Anybody got a better explanation?)

Jon
July 21, 2022 10:31 pm

Well, it’s always something, isn’t it. I still don’t buy that we are causing great harm and heating up the planet with greenhouse gases. The number one greenhouse gas is water vapor, from what I’ve read. Then we have volcanoes that can thrust us into a dark age and an ice age, and things coming out of space and striking the planet. Those are game changers, and much more dangerous than CO2, IMHO. But the climate alarmists are looking for relevance and a paycheck, so we should all follow blindly and continue greening our grid. When you can’t charge your Tesla and your cell phone goes dead, just hope that pocket-sized solar battery pack you bought works, and that you have a USB cable to charge your phone. Then you can call somebody who cares, or the police, if their phones aren’t dead.

July 21, 2022 10:33 pm

The rise and fall of China’s dynasties may be ascribed to climate change – the warm and cold periods closely match the periods of climate change outlined by author Worrall.

This relationship between temperature and dynastic potency was first drawn by a meteorologist named Zhu Kezhen in a 1972 paper. Zhu was one of the first Chinese PhD graduates of Harvard University and helped lay the foundations of modern meteorology on the mainland (The link at the base of this note will bring up the image belonging to this comment).

In his paper, the last he wrote and considered a classic for its elegant prose and bold conclusions, Zhu drew a graph plotting temperatures in the Yellow River region from 1500BC to 1950. Based on archaeological artefacts and historical documents, the graph charted the rises and falls in temperature.

It showed that there were three extended periods of warm temperatures.
The first coincided with the Shang dynasty (1600BC-1046BC), when the annual average temperature reached as high as 11.3 degrees Celsius. This period saw the emergence of the first comprehensive set of Chinese characters, massive construction of palaces and cities, large-scale farming and the production of systematic astronomical records and sophisticated bronze wares.

The second extended period of warm temperatures lasted more than 700 years, from the Eastern Zhou dynasty (770BC-256BC) to the Western Han dynasty (206BC-9AD), when average temperatures peaked at 10.7 degrees Celsius. In the Eastern Zhou, China’s territory expanded from the Yellow River to Guangdong, Yunnan and Sichuan. There was an enormous bamboo forest along the Yellow River, while the Yangtze River cut through lush rainforest. Slavery was abandoned, iron tools became popular in farming, and Confucius and other scholars established the philosophies that shape Chinese society today. By the time temperatures started to dip, China had built the Great Wall, a national network of roads and conquered Xinjiang, Vietnam, Taiwan and Korea.

A third warm period, when average temperatures peaked at 10.3 degrees, coincided with the Tang dynasty, widely seen as the peak of Chinese civilisation. Some historians estimated China accounted for 60 percent of global gross domestic product during this era. From textiles, ceramics, mining, and shipbuilding to paper making, China led the world in almost every sphere. There was also a spiritual and cultural boom – there were more poets in the Tang than at any time in history. In between these great dynasties, average temperatures plunged and chaos reigned. The Chinese empire retreated and was even driven into the sea by the invading Mongols who established the Yuan dynasty (1271-1368). The longest period of relative cold lasted from the end of the Tang to the fall of the Qing dynasty in 1911.

Now temperatures are on the rise again, matched by scorching economic growth. According to the Yellow River Conservancy Commission, the average annual temperature was 10.3 Celsius from 2001 to 2007 – the same as in the Tang dynasty.

Zhu’s research was based on records which make for interesting comparisons with the present day. Rice could be harvested twice a year to the north of the Yellow River in the Eastern Zhou dynasty, whereas the region is generally dry now. Plum trees were common along the Yellow River.

Source URL (retrieved on Aug 28th 2013, 12:35pm): http://www.scmp.com/article/700638/china-gives-history-lesson-warming

ralfellis
July 21, 2022 11:20 pm

We still do not know what causes warming.

Dr John Christy of the UAH temperature dataset says that the Tropical Tropospheric Temperature (TTT) record indicates that greenhouse warming accounts for only a quarter of the total warming claimed by the IPCC.

So what causes the rest? Data manipulation by ‘scientists’ may be one reason for the alarmist data.

Another reason may be Chinese industrial dust on northern ice sheets, lowering their albedo and allowing more insolation to be absorbed. But if Chinese industry falters, as appears to be happening, then Chinese emissions will fall.

So we may end up with less data manipulation, and less Chinese-induced albedo reductions, leading to lower temperatures over the next decade. And do remember that the AMO and PDO oceanic cycles should be in cold mode too, which would add to cooling effects.

Ralph

Bruce Cobb
July 22, 2022 1:35 am

You can tell how much of a problem The Pause is for the Warmunists (even though it is still just a baby Pause at this point) by the knots they tie themselves into trying to Climatesplain it away. CO2 heat is special, with apparently magical powers, according to them. Hilarious.

July 22, 2022 6:57 am

Several points any article about UAH data should make:

Data begins in 1979, during a global warming trend that began in 1975

There is much less infilling (guessing) than in surface data

The measurements are made in a consistent environment where the greenhouse effect occurs

The rate of warming is slower than in all surface averages.

The rate of warming over the oceans is significantly slower than over land

Global warming is not consistent in every year, or in every five-year period.

There were two large heat peaks (in 1998, and in late 2015 / early 2016), caused by El Nino heat releases, which were not caused by CO2 emissions. But they do contribute to the rising linear trend line since 1979.

The 2015 to 2022 period, for one short term example, had a flat global average temperature trend.

This article has a biased focus on the 2015 to 2022 period
without the context of many other important facts about UAH data.
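The “flat trend” claim above rests on an ordinary least-squares slope over a short window. A minimal Python sketch of that calculation, with invented anomaly values standing in for the real UAH series:

```python
# Ordinary least-squares slope of a temperature-anomaly series.
# The anomaly values below are invented, not the actual UAH data.
def ols_slope(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den  # degrees C per year

years = list(range(2015, 2023))
anoms = [0.27, 0.39, 0.26, 0.23, 0.36, 0.39, 0.21, 0.26]  # made-up values

print(round(ols_slope(years, anoms), 4))  # near zero: a "flat" trend
```

Over such short windows the slope is dominated by year-to-year variability, which is why a flat eight-year fit says little either way about the long-term trend.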

Schrodinger's Cat
July 22, 2022 9:12 am

How does 93% of the alleged trapped heat go into the oceans? What is the mechanism?

Matt G
Reply to  Schrodinger's Cat
July 22, 2022 11:00 am

It doesn’t. LWR can only contact the skin layer on the surface of the water, increasing evaporation and latent heat. It can’t magically warm 200m to 700m or more of ocean depth, despite the alarmists’ BS. The warming of the oceans is from decreasing global cloud albedo of around 1% per decade, causing more SWR to heat the oceans down to 200m depths. From the depths the SWR reaches, the heat can be easily mixed into greater depths by the general ocean circulation, with its rising and sinking water.

Ulric Lyons
July 22, 2022 11:34 am

The Minoan culture thrived from around 2750 BC, as did many other cultures, when the GISP2 series shows Greenland at its coldest for 3450 years. The culture collapsed around 1200 BC, at the end of a super solar minimum which began in 1250 BC.

With low solar, negative North Atlantic Oscillation conditions increase, driving a warmer AMO, which adds to the direct Greenland warming from the negative NAO conditions. The negative NAO is also associated with slower trade winds, so El Nino conditions increase, and the Mediterranean Sea becomes warmer. Crete would have seen harsher winters, and summers ranging from drought to medicanes.

3450 years later, the 700s AD had the warmest northern European summers of the Medieval Warm Period, according to Esper et al 2014. 3540 years before 2750 BC was the 8.2kyr event, when there were expansions of village settlements all around the northern hemisphere: an early Harappan expansion, Lepenski Vir in Serbia, and wheat growing and a boatyard at the Isle of Wight, England.

As for the Little Ice Age, there should be some colder periods in Greenland then, when Europe was very warm during the first half of the 1500s, and in the very warm 1610s to 1660s. Though if we compare GISP2 to a proxy of sea surface temperatures by southeast Greenland, the warming during the Oort solar minimum, labeled “Medieval Warm Period” in the post chart, is profound, but the warming during the Maunder and Dalton solar minima is not at all apparent in GISP2.

(note the post GISP2 chart time scale is not linear)


Reply to  Ulric Lyons
July 24, 2022 2:37 am

Typo.. 3540 years before 2750 BC should have been 3450 years before 2750 BC.

Richard Hill
July 22, 2022 12:40 pm

Why do we rely upon temperature when the most obvious and unbiased measure is sea level rise? There are hundreds of tide gauge records kept by NOAA, and they all show the same steady rise of sea level with NO acceleration since CO2 began to be added in quantity after 1950.

Ice and snow don’t melt in response to politics, only to heat input, and there is no additional melt rate from the massive addition of CO2; ergo, it’s irrelevant. For example:

https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?id=8518750

from the larger records at

https://tidesandcurrents.noaa.gov/sltrends/sltrends.html

Also, of course, the pause reflects whatever natural phenomenon is actually resulting in our planet warming slowly.
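The “no acceleration” claim is the kind of thing that can be checked directly from a gauge record: one crude test is to compare the linear rate over the first and second halves of the series. A minimal Python sketch with an invented, purely linear record (the NOAA pages above provide the real data):

```python
# Crude acceleration check: compare the linear rise rate over the first
# and second halves of a tide-gauge record. Equal rates -> no acceleration.
def rate_mm_per_yr(years, levels):
    n = len(years)
    mx = sum(years) / n
    my = sum(levels) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, levels))
    den = sum((x - mx) ** 2 for x in years)
    return num / den  # least-squares slope, mm per year

years = list(range(1950, 2021))
levels = [2.8 * (y - 1950) for y in years]  # made-up, purely linear record

half = len(years) // 2
early = rate_mm_per_yr(years[:half], levels[:half])
late = rate_mm_per_yr(years[half:], levels[half:])
print(round(early, 2), round(late, 2))  # 2.8 2.8 -> constant rate
```

A real record would carry year-to-year noise, so in practice the two rates would be compared against their uncertainties rather than for exact equality.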

Matt G
Reply to  Richard Hill
July 23, 2022 3:46 am

Increasing igneous rock building in the oceans from volcanic eruptions causes sea levels to rise, even if global temperatures stayed the same for hundreds of years. Over recent decades and centuries, new islands have even formed, and this is of course going to cause sea levels to rise because the igneous rock displaces the water.

This is not accounted for at all when dealing with rising sea levels.

Randy L
July 25, 2022 7:53 am

A bit deceiving that the time scale on the left side of the chart is much more compressed than on the right-hand (Little Ice Age) side.