By Andy May
This is the text of my presentation on Tom Nelson’s podcast which can be viewed here. The question and answers start at about 18:15 into the interview.
The first IPCC Physical Science Basis report is called “FAR” and was first published in 1990. An updated 1992 version of the report contains this statement:
“global-mean surface air temperature has increased by 0.3°C to 0.6°C over the last 100 years … The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. … The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
(IPCC, 1992, p. 6).
This was an accurate statement at the time, and it is mostly accurate to this day. In the past century (since 1920) temperatures have increased about one degree, and I'm not sure we will be able to detect a human-enhanced greenhouse effect in ten years, or ever, but otherwise the quote is still accurate. One degree of global warming in a century is well within natural climate variability according to historical records and records of glacier advances and retreats (Vinós, 2022, pp. 89-107).
Glaciers exist today, where no glaciers existed during the Medieval Warm Period from about 800 to 1200 AD and during the Holocene Climatic Optimum from about 7500 to 4500 BC. In addition, the Vikings farmed parts of Greenland where permafrost exists today. Ötzi, the Tyrolean iceman, who was frozen into a glacier about 5,000 years ago and only recently discovered in his glacier tomb, can attest to the fact that modern glaciers are more advanced than they were before 3000 BC.
The second report, called SAR, was published in 1996 and 1997. Chapter 8 was a major issue when it came out because in the original draft, the scientists who wrote it all agreed to include this statement:
“no study to date has both detected a significant climate change and positively attributed all or part of that change to anthropogenic causes.”
(Final draft, approved by all 36 authors, SAR, July 1995)
Yet, in the final meeting of the IPCC supervising committee (government politicians, editors, and IPCC lead authors) on November 29th, 1995, a meeting that ran very late into the early morning of November 30th, this statement was changed to read:
“The balance of evidence suggests a discernible human influence on global climate.”
(IPCC, 1996, p. 4).
This change was agreed to by the lead authors and political representatives of the participating countries, without consulting the scientists who wrote and approved the final draft months earlier (May, 2020c, pp. 230-235). The change caused an uproar in the scientific community, with Frederick Seitz, the 17th president of the United States National Academy of Sciences, writing about it in the Wall Street Journal (1996) under the headline "A Major Deception On Global Warming."
In the article, Seitz writes:
“In my more than 60 years as a member of the American scientific community, including service as president of both the National Academy of Sciences and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.”
Frederick Seitz, the 17th president of the United States National Academy of Sciences
He did not choose the word “corruption” lightly.
The third report, "TAR," was published in 2001. It was seriously tarnished by the inclusion and promotion of the notorious "hockey stick" graph, which was later shown to be unreliable due to major statistical errors and flawed input data.
Even so, the IPCC included the following statement that was based on the flawed hockey stick:
“In the light of new evidence and taking into account the remaining uncertainties, most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.”
(IPCC, 2001, p. 699).
Numerous reports and peer-reviewed articles, most notably by Stephen McIntyre and Ross McKitrick, Edward Wegman, and the U.S. National Research Council, detailed the flaws in the graph (May, 2020c, pp. 164-198). Analysis showed that random red noise fed into the statistical algorithm used to create the hockey stick still produced hockey sticks.
The fourth report “AR4” issued this statement:
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
(IPCC, 2007b, p. 10).
This was very much like what was written in TAR, where the same conclusion was based on the now-discredited hockey stick. AR4 backed away from the hockey stick, admitting it was flawed, but it also claimed that there was a very high chance that the Himalayan glaciers would melt by 2035. This turned out to be an impossibility, and the head of the AR4 effort, Rajendra Pachauri, had to back down and apologize for the error.
This and other problems with the report led to a U.N. InterAcademy Council investigation that found that the IPCC guidelines for their reports had not been followed and that serious bias had crept into AR4. They also found that a full range of peer-reviewed views were not included.
AR5, published in 2013, included the following statement:
“More than half of the observed increase in global mean surface temperature (GMST) from 1951 to 2010 is very likely due to the observed anthropogenic increase in greenhouse gas (GHG) concentrations.”
(IPCC, 2013, p. 869)
This is very similar to the conclusions of TAR and AR4, but no new evidence is included in the report. Importantly, John Christy, Ross McKitrick, and others had warned the authors of the report that the climate models they were using predicted much more warming in the tropical troposphere than was observed (see figure 1). Still later, Ross McKitrick and John Christy showed that nearly all the AR5 models predicted too much warming at a statistically significant level (McKitrick & Christy, 2018) and this excess warming was dubbed the “hot spot.”
The hot spot still exists in AR6 and has gotten worse (McKitrick & Christy, 2020). It is notable that if the human greenhouse gas emissions are removed from the climate models the fictitious hot spot goes away and the models move much closer to observations.
In AR6 we read the following:
“The likely range of human-induced change in global surface temperature in 2010–2019 relative to 1850–1900 is 0.8°C to 1.3°C, with a central estimate of 1.07°C, encompassing the best estimate of observed warming for that period, which is 1.06°C with a very likely range of [0.88°C to 1.21°C], while the likely range of the change attributable to natural forcing is only –0.1°C to +0.1°C.”
(AR6, page 59).
Thus, they now claim that it is likely all the warming since the 19th century is due to humans. And this is despite the fact that in the tropical troposphere their climate models are statistically invalidated if they include human greenhouse gas emissions in the model.
They were warned to avoid confirmation bias and that the AR5 models were running too hot.
Yet, in AR6, they made the models run even hotter than in AR5 and they ignored dissenting opinions by Richard Lindzen, Roger Pielke Jr., John Christy, Ross McKitrick and many other prominent climate scientists. This is illustrated in figure 2.

Notice that the range of AR5 model results does not touch 0.6, yet in AR6 it does.
In AR6, notice the coupled ocean/atmosphere models (red boxes) produce higher sea surface temperatures than the observed sea surface temperatures (blue boxes). The model/observation mismatch in sea surface temperatures in the Pacific Ocean is a very serious problem.
Besides sea surface temperatures, the IPCC/CMIP climate models have a serious problem with clouds. They cannot model clouds. It is well known and accepted that clouds are net cooling, but how do they respond when surface temperatures rise? What is the net feedback of clouds when the world warms? They don’t know and the uncertainty in the cloud response to warming is nearly as large as the total uncertainty in all modeled surface warming feedbacks.
We find this in AR6 on the subject:
“… CMIP6 models have higher mean ECS and TCR [climate sensitivity to greenhouse gases] values than the CMIP5 generation of 50 models. They also have higher mean values and wider spreads than the assessed best estimates and very likely ranges within this [AR6] Report. These higher ECS and TCR values can, in some models, be traced to changes in extra-tropical cloud feedbacks that have emerged from efforts to reduce biases in these clouds compared to satellite observations (medium confidence). The broader ECS and TCR ranges from CMIP6 also lead the models to project a range of future warming that is wider than the assessed warming range”
(AR6, p 927).
Translation: We adjusted our cloud feedback parameters to try to fix the mismatch with the real world, and when we did that, the already too-warm models got worse. They are clearly in that stage of their modeling effort that every time they try and fix a mismatch, they break something else. It is a sign that their models are missing some vital component of climate.
Figure 3 is a plot of modeled climate feedback versus model-calculated ECS, or equilibrium climate sensitivity to a doubling of CO2 (Ceppi, Brient, Zelinka, & Hartmann, 2017). Remember that cloud feedback cannot be modeled; it must be input to the model via user-adjustable parameters. The plot tells us that 71% of the model-computed ECS is determined by these user-input parameters. The models can literally produce almost any ECS the modeler desires.
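The leverage of the cloud parameter is easy to see from the standard zero-dimensional energy-balance relation, ECS = −F2x/λ, where λ is the sum of the feedbacks. The feedback numbers below are illustrative round values of my own choosing, not taken from Ceppi et al. or any particular model:

```python
# Zero-dimensional energy balance: ECS = -F2x / (sum of feedbacks).
# All numbers below are assumed, illustrative round values, chosen only
# to show how strongly the cloud term leverages the result.
F2X = 3.93       # W/m^2, forcing from doubling CO2
PLANCK = -3.22   # W/m^2/K, Planck response
WV_LR = 1.30     # W/m^2/K, water vapor + lapse rate combined
ALBEDO = 0.35    # W/m^2/K, surface albedo

def ecs(cloud):
    """Equilibrium climate sensitivity (K) for a given cloud feedback (W/m^2/K)."""
    return -F2X / (PLANCK + WV_LR + ALBEDO + cloud)

for cloud in (-0.2, 0.2, 0.6, 1.0):
    print(f"cloud feedback {cloud:+.1f} W/m^2/K -> ECS {ecs(cloud):.1f} K")
```

With these assumed numbers, moving the cloud term from −0.2 to +1.0 W/m²/K roughly triples the computed ECS, from about 2.2 K to about 6.9 K, while every other feedback stays fixed.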

As previously mentioned, the IPCC climate models have a hard time with sea surface temperatures. They not only predict higher sea surface temperatures than observed, they also get the pattern of warming and cooling oceans wrong. It seems they have decided that their models must be correct, so they have assumed that the feedbacks must be changing, and this has screwed them up.
They are fundamentally changing their models such that they cannot be refuted by observations. By hypothesizing a continually changing climate state, they are making their already unfalsifiable ideas even more unfalsifiable. As Karl Marx and his followers found out, if your hypothesis is fluid enough, you can conclude whatever you want, and no one can challenge you. From Karl Popper, 1962, page 37:
“The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted [a] soothsaying practice. In some of its earlier formulations (for example in Marx’s analysis of the character of the ‘coming social revolution’) their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation; but they did so at the price of adopting a device which made it irrefutable. They … destroyed its much-advertised claim to scientific status.”
(Popper, 1962, p. 37).
So now AR6 claims that as surface temperatures rise, the feedbacks to that warming change. In one fell swoop, they both explain why their models do not match observations and they invalidate those pesky observation-based calculations of climate sensitivity that are so much less than their model-based estimates.
As you can see in the AR6 maps in figure 4, the modeled ocean temperatures are much simpler than the observed pattern. Further, the cloud cover over South America is increasing, not decreasing as predicted. The models expect the eastern Pacific to warm much more than observed and the western Pacific is warming much more than predicted. The pattern is wrong.

They claim that the models are OK, they just need to adjust their feedbacks. Richard Seager and his colleagues write:
“The tropical Pacific Ocean response to rising GHGs impacts all of the world’s population. State-of-the-art climate models predict that rising GHGs reduce the west-to-east warm-to-cool sea surface temperature gradient across the equatorial Pacific.
In nature, however, the gradient has strengthened in recent decades as GHG concentrations have risen sharply. This stark discrepancy between models and observations has troubled the climate research community for two decades. … The failure of state-of-the-art models to capture the correct response introduces critical error into their projections of climate change in the many regions sensitive to tropical Pacific sea-surface-temperatures.”
(Seager, Cane, & Henderson, 2019)
AR6 has its own version of the TAR hockey stick, and it is just as flawed as the first one. They also published a refutation of their own version on page 316 of their report, as shown in figure 5. They want us to believe that the last decade was warmer than any decade in the past 125,000 years. The data they rely on from 10,000 years ago to 2,000 years ago only has century (10 decades!) resolution by their own admission. I added the red circle, arrows, and brackets to their figure 2.11.
Notice especially the bracket. The uncertainty bars in their plot of temperatures from 10,000 years ago to 2000 years ago are larger than all the recent warming. In other words, their own data does not support their statement. They can’t possibly tell us anything about how the most recent decade compares to any decade prior to around 1850, at the end of the Little Ice Age.
In closing, I could go on and on, but the bottom line is that AR6 is the worst and most biased IPCC Physical Science Basis report ever. SAR through AR5 were bad, but AR6 is beyond help.
Take this from one of the few who has read all of them.
It is very clear that the IPCC is losing the public; polls repeatedly show the world population does not believe global warming is a priority. Recent polls show that skepticism about human-caused climate change is increasing around the world. A recent University of Chicago poll found that the belief that humans have caused all or most climate change slumped to 49% from 60% just five years ago. Seventy percent of the U.S. public are unwilling to spend more than $2.50 a week to combat climate change.
Sixty percent of U.S. voters believe that climate change has become a religion that has nothing to do with climate. Billions of dollars, six major reports totaling 6,543 pages (2,391 of them, more than a third, in AR6, as shown in figure 6), and a total of 47 reports of all kinds, and the public has not been convinced that climate change is important. It's time for the IPCC to reform or give up, in my opinion.
For more details about the flaws in AR6, read the Clintel report. It was created by an international team of scientists from seven countries. It has been extensively peer-reviewed by some of the world's top climate scientists. The cover is shown below as figure 7. It is available as a low-resolution PDF for download at clintel.org and will be for sale as a proper ebook or paperback at Amazon, Kobo, and Barnes and Noble on May 29th.

Download the bibliography here.



“the IPCC/CMIP climate models have a serious problem with clouds. They cannot model clouds. It is well known and accepted that clouds are net cooling, but how do they respond when surface temperatures rise? What is the net feedback of clouds when the world warms?” I answered those questions not long ago.
Clouds Haven’t Behaved the Way the IPCC Or the Models Say
A straightforward examination of global cloud data showed that cloud cover increases when surface temperatures rise and that net cloud feedback is negative.
The GIGO computer games assume increased water vapor is a positive feedback. This requires ignoring the effect of evaporative cooling and parameterizing clouds as a positive rather than negative feedback.
That the net effect of clouds is cooling was long settled science, until the gravy-train-supporting narrative needed shoring up. Computers would need at least 100,000 times as much resolution to actually model clouds.
Shameless.
Surely if cloud feedback was positive,
We would have been toast from the get-go?
Yes, Earth’s atmosphere would have been a cloud-covered steamy ball for a couple of billion years, followed by losing most of its water to outer space by the present time….Instead, being mostly a liquid-water-covered planet, as the vapor pressure of water (and so its concentration in the atmosphere) rises at 7% per degree, the formation of clouds reflects sunlight back into outer space. This controls the planet’s surface temperature, with variation dependent on the rate at which sea surfaces can absorb and emit the energy the clouds have not reflected…this with a variance of only a degree or so of sea surface temperature over a few hundred years.
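For what it's worth, the "7% per degree" figure follows from the Clausius–Clapeyron relation, d(ln es)/dT = L/(Rv·T²). A quick sketch with textbook constants (illustrative values, not from any IPCC source):

```python
# Clausius-Clapeyron sanity check of the '~7% per degree' figure:
# d(ln e_s)/dT = L / (R_v * T^2). Constants are assumed textbook values.
L_VAP = 2.5e6   # J/kg, latent heat of vaporization of water
R_V = 461.5     # J/(kg K), specific gas constant of water vapor

def cc_rate(T):
    """Fractional rise in saturation vapor pressure per kelvin at temperature T (K)."""
    return L_VAP / (R_V * T ** 2)

print(f"{100 * cc_rate(288.0):.1f}% per degree at 15 C")  # about 6.5%
```

At a typical surface temperature of about 288 K this gives roughly 6.5% per degree, consistent with the commonly quoted 6–7% range.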
Having thoroughly beaten the Young Earth out of popular conversations, culture warriors find no way to convince people their morning commute is destroying a billions year old planet.
Come on, man. The Science. It’s common sense. Would you prefer to go to the beach on a nice hot cloudy day or a cold blue sky day with all that glare? I mean how many times were you laying on the beach toasty warm when suddenly the clouds rolled away and the sun got in your eyes and you started to shiver?
It seems like we live in two realities just like BarryO said.
Did you forget the sarc ?
I don’t do that. Nor do I advocate having ‘Stop Sign Ahead’ Sign Ahead signs.
Most of my family prefer to swim and frolic at the beach, then set up and fish until sunset.
Dark overcast days make for better fishing.
Sunbunnies tend to be a minority. Some will sun themselves temporarily, most retreat to their shaded umbrellas.
Beaches, especially road accessible beaches may look crowded, but they represent a miniscule amount of land area that makes small numbers of people look crowded.
My opinion. Clouds won’t ever be modeled. They are a chaotic result of several mechanisms, not the least are clouds over sea vs land. Any attempt to “capture” them is doomed to large error
“Computers need at least 100,000 times as much resolution”
?
They get it completely wrong. Not only do you get evaporative cooling from increases in CO2 DWIR, you don’t get any warming due to boundary layer equilibrium. As such, it’s good CO2 absorbs a little more energy to offset the cooling.
IT IS NOT A NEGATIVE FEEDBACK FOR CO2! It is much greater direct effects of the processes of evaporation and condensation of water. Water evaporation is an endothermic process that cools the surface water to near the atmospheric dew point. That evaporated moisture is lighter than air and rises to where it condenses (an exothermic process) at the bottom of clouds. There is not much difference in dew point temperatures at the surface and at the bottom of clouds, so radiation between the surface and the bottom of clouds is much less than clear-sky radiation.
2 points
If the amount of radiation decreases in the presence of clouds, that by definition is a negative feedback.
Secondly, the surface doesn’t radiate directly to space, even when there is no water vapor in the atmosphere.
All capitals traditionally indicates shouting or emphasis – use sparingly.
“They are clearly in that stage of their modeling effort that every time they try and fix a mismatch, they break something else. It is a sign that their models are missing some vital component of climate.”
https://youtu.be/mOA_SUKEZRE
“Even so, the IPCC included the following statement that was based on the flawed hockey stick:”
The hockey stick is not flawed, and has since been reproduced by many scientists, using independent methods.
But that IPCC statement was certainly not based on it. Here is the relevant page of the AR3:
It gives seven bullet points in support. The only mention of paleo is in the last sentence of the first point:
“Reconstructions of climate data for the past 1,000 years (Figure 1b) also indicate that this warming was unusual and is unlikely to be entirely natural in origin”
IOW it doesn’t say anything about what caused the warming. It just says that it was unusual.
The hockey-stick certainly is flawed.
“Andrew Montford’s The Hockey Stick Illusion is one of the best science books in years. It exposes in delicious detail, datum by datum, how a great scientific mistake of immense political weight was perpetrated, defended and camouflaged by a scientific establishment that should now be red with shame. It is a book about principal components, data mining and confidence intervals—subjects that have never before been made thrilling. It is the biography of a graph.
[] A retired mining entrepreneur with a mathematical bent, McIntyre asked the senior author of the hockey stick graph, Michael Mann, for the data and the programs in 2003, so he could check it himself. This was five years after the graph had been published, but Mann had never been asked for them before. McIntyre quickly found errors: mislocated series, infilled gaps, truncated records, old data extrapolated forwards where new was available, and so on. Not all the data showed a 20th century uptick either. In fact just 20 series out of 159 did, and these were nearly all based on tree rings. In some cases, the same tree ring sets had been used in different series. In the end the entire graph got its shape from a few bristlecone and foxtail pines in the western United States; a messy tree-ring data set from the Gaspé Peninsula in Canada; another Canadian set that had been truncated 17 years too early called, splendidly, Twisted Tree Heartrot Hill; and a superseded series from Siberian larch trees. There were problems with all these series: for example, the bristlecone pines were probably growing faster in the 20th century because of more carbon dioxide in the air, or recovery after “strip bark” damage, not because of temperature change. This was bad enough; worse was to come. Mann soon stopped cooperating, yet, after a long struggle, McIntyre found out enough about Mann’s programs to work out what he had done. The result was shocking. He had standardised the data by “short-centering” them—essentially subtracting them from a 20th century average rather than an average of the whole period. This meant that the principal component analysis “mined” the data for anything with a 20th century uptick, and gave it vastly more weight than data indicating, say, a medieval warm spell.
[] As a long-time champion of science, I find the reaction of the scientific establishment more shocking than anything. The reaction was not even a shrug: it was shut-eyed denial. If this had been a drug trial done by a pharmaceutical company, the scientific journals, the learned academies and the press would have soon have rushed to discredit it—and rightly so. Instead, they did not want to know. Nature magazine, which had published the original study, went out of its way to close its ears to McIntyre’s criticisms, even though they were upheld by the reviewers it appointed. So did the National Academy of Sciences in the US, even when two reports commissioned by Congress upheld McIntyre. So, of course, did the IPCC, which tied itself in knots changing its deadlines so it could include flawed references to refutations of McIntyre while ignoring complaints that it had misquoted him. The IPCC has taken refuge in saying that other recent studies confirm the hockey stick but, if you take those studies apart, the same old bad data sets keep popping out: bristlecone pines and all. A new Siberian data series from a place called Yamal showed a lovely hockey stick but, after ten years of asking, McIntyre finally got hold of the data last autumn and found that it relied heavily on just one of just twelve trees, when far larger samples from the same area were available showing no uptick. Another series from Finnish lake sediments also showed a gorgeous hockey stick, but only if used upside down. McIntyre just keeps on exposing scandal after scandal in the way these data were analysed and presented.
[]” – my bold
from The case against the hockey stick
“He had standardised the data by “short-centering” them—essentially subtracting them from a 20th century average rather than an average of the whole period. This meant that the principal component analysis “mined” the data for anything with a 20th century uptick, and gave it vastly more weight than data indicating, say, a medieval warm spell.”
It is better to subtract the global average, but it doesn’t make much difference. Where McI conned folks was by showing what it did to the first principal component P1. Yes, it did make an uptick there. But PCA is just reorienting the axes; you still have to add them up. And when you do, the uptick is gone. The uptick was taken from the other components.
McI himself in a 2005 paper showed that. He did the proper thing and redid the reconstruction (not just P1) subtracting full average. And sure enough, he got the same hockey stick as MBH. Here is his diagram:
To make a clearer comparison, I have replotted the same data:
Nick,
Short-centering is a problem. But, Mann’s real error was using EOFs at all on such data. The data all have different temporal resolutions, record lengths, seasonal sensitivities, and accuracy relative to actual local temperature. Any such statistical treatment is going to reduce the temporal resolution of the result and kill any spikes and troughs, invalidating the conclusion that today’s warming is unusual.
WRT your long quote, all seven points refer to the hockey stick in one way or another, as any careful reader can see. Seven ways to say the same thing, typical IPCC.
What are EOFs?
Joseph,
Sorry to slide into jargon. In statistics empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from mapped data. A synonym is mapped weighted principal components analysis in geophysics. This methodology is common in geophysics and meteorology.
It is handy when you know your data well from beginning to end, as in geophysical analysis. It is misleading when dealing with data you only know at one end, like temperature proxies. Error has a tendency to change radically with age in proxies, which was the hockey stick's main Achilles heel.
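To make the jargon concrete, here is a minimal EOF/PCA sketch on made-up random data (sizes and seed are arbitrary illustrative choices). It decomposes a space-time anomaly matrix into orthogonal spatial patterns (EOFs) and their amplitude time series (PCs), and confirms that summing every mode simply reproduces the data:

```python
import numpy as np

# Minimal EOF/PCA sketch: decompose a space-time anomaly matrix into
# orthogonal spatial patterns (EOFs) and their amplitude time series (PCs).
rng = np.random.default_rng(1)
data = rng.standard_normal((120, 8))   # 120 time steps at 8 locations (toy data)
anom = data - data.mean(axis=0)        # center each location's series

u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s    # one column per mode: the PC time series
eofs = vt      # one row per mode: the orthogonal spatial pattern

# PCA only reorients the axes; adding all the modes back gives the data exactly.
print(np.allclose(pcs @ eofs, anom))   # True
```

No single mode means anything by itself; it is only the full sum that reconstructs the original field, which is why looking at PC1 in isolation can mislead.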
“But, Mann’s real error was using EOFs at all on such data. The data all have different temporal resolutions, record lengths, seasonal sensitivities, and accuracy relative to actual local temperature.”
You have a bunch of vectors (proxy) that you are going to combine into a recon. Using PCA, or EOFs, is just a step along the way. Insofar as your objection has merit, it applies to doing any kind of recon.
Here we agree. Global and hemispheric reconstructions from proxies are total BS. They are not accurate enough and their temporal resolutions are too poor.
Use proxies at their locations and modern data at that location to compare to. It is the only way. See here:
https://andymaypetrophysicist.com/2021/06/23/how-to-compare-today-to-the-past/
Andy,
“Use proxies at their locations and modern data at that location to compare to.”
That is exactly what MBH and others did. They calibrated the proxy against instrumental data in one time interval, checked the two in another time interval, and used that to convert the proxy to temperature.
No it isn’t, Mann and all others who create reconstructions from proxies, including me, combine multiple disparate proxies into one record for a hemisphere or the world, sometimes with an EOF (the worst way) sometimes with other methods. It is the combining that is a problem. Use the proxies one at a time, at their specific location. Read Soon and Baliunas (2003) or see my post with the link above.
You missed my point entirely.
“Use the proxies one at a time, at their specific location.”
Use them for what? What do you do with them?
To track long-term climate change, with some indication of the amplitude of the changes, at that location. The concept that “global average surface temperature” has some meaning is false. Climate changes regionally and the Northern and Southern Hemispheric mid latitudes are different from the tropics, each other, and from the polar regions. See Vinos’ book here:
https://www.researchgate.net/profile/Javier-Vinos/publication/363669186_Climate_of_the_Past_Present_and_Future_A_scientific_debate_2nd_ed/links/63296077071ea12e36487da9/Climate-of-the-Past-Present-and-Future-A-scientific-debate-2nd-ed.pdf
Nick, reconstructions are the problem. Smearing individual proxy measures into a reconstruction through time using PCA wastes effort. A much cleaner way to proceed is to treat each proxy as generated by a unique stochastic process. The question is: are the temperature proxies calculated from each tree ring series stationary? If 10 rings at a site show temperature stationarity and 1 doesn't, should the site be considered stationary in temperature? The only reason to create reconstructions is to hide behind a scary graph. The hockey stick reconstruction graphs make absolutely no sense and serve no purpose. More importantly, they don't correlate at all with actual historical data. Tony Heller provides lots of useful information on this point. The World's Smartest Person – YouTube
Andy,
“WRT your long quote, all seven points refer to the hockey stick in one way or another, as any careful reader can see.”
I would challenge you to identify any reference to HS in any point except 1. To take an example, point 3:
“Simulations of the response to natural forcings alone (i.e., the response to variability in solar irradiance and volcanic eruptions) do not explain the warming in the second half of the 20th century (see for example Figure 4a). However, they indicate that natural forcings may have contributed to the observed warming in the first half of the 20th century.”
Where is the hockey stick there?
McKitrick and Christy have shown the simulations are statistically invalid.
They haven’t. But that has nothing to do with Mann’s Hockey Stick.
Nick, doesn’t centered PCA data show a major difference? There is no millennial cold stable period followed by large recent warming. In fact, it shows there apparently was a Medieval Warm Period that was as warm or warmer than today, ending with the onset of the Little Ice Age, both conforming to history, which the hockey stick tried mightily to erase.
“both conforming to history”
Actually, it doesn’t, at least by McIntyre’s version. That puts a big warm peak in 1400-1450, just after the Norse had frozen in Greenland.
But the reason is his hanky-panky with Gaspe cedars. He removed that data with no good reason, and insufficient data remained for a reliable recon. His peak there is spurious. But it has no connection with the short-centering, which was alleged to have affected the modern end, but didn’t.
The charts make your point by nicely aligning on the right axis, but also break your point by misaligning on the left axis. If I look at the red and blue lines and try to think of them as sneaker brand sales figures or something else I’m not interested in I think “the red one really caught up”. So on net I think your data breaks your argument – and I usually upvote your posts because you use data.
It is the right axis that counts, because that is where the alleged incorrect hockey stick should appear, and it doesn’t. The left deviation is due to something else illegitimate that McIntyre did by removing Gaspe cedar data.
Nick, the whole PCA analysis is a waste of time. It adds no value and actually causes a significant loss of data.
“The hockey stick is not flawed, and has since been reproduced by many scientists, using independent methods.”
Not true, of course. The Hockey Stick is a lost cause scientifically. Its main use now is as an element of the creed. Activists recite their belief in it as a way of publicly testifying to righteousness. While the intellectual leadership of the climate movement has backed off from it as an embarrassment.
“While the intellectual leadership of the climate movement has backed off from it as an embarrassment.”
Completely untrue. Here is the AR4 diagram on the topic:
MBH99 leads the way. But notice that it is right in there with another bunch of reconstructions that agree with it. Since then recons (Mann’s included) have reached further back, so MBH99 gets less emphasis. But it has stood up well.
The “hockey stick” is always misunderstood here. Proxies do largely support the instrumental rise of the last century or so, but they aren’t needed for that. What they do show is that before that temperature didn’t vary much. And that is what they all still show.
The date of AR4? Thought that was 2007. Irrelevant to the point.
Keith Briffa wrote in AR4:
“Some of the studies conducted since [TAR] indicate greater multi-centennial Northern Hemisphere temperature variability over the past 1 kyr than was shown in TAR.”
He is trying to be nice, but it is an admission the hockey stick was a failure. AR4, page 436.
Nothing in AR4 is attributed to individual authors. And I can’t find those words anywhere. I do find this:
“Figure 6.10b illustrates how, when viewed together, the currently available reconstructions indicate generally greater variability in centennial time scale trends over the last 1 kyr than was apparent in the TAR. It should be stressed that each of the reconstructions included in Figure 6.10b is shown scaled as it was originally published, despite the fact that some represent seasonal and others mean annual temperatures. Except for the borehole curve (Pollack and Smerdon, 2004) and the interpretation of glacier length changes (Oerlemans, 2005), they were originally also calibrated against different instrumental data, using a variety of statistical scaling approaches. For all these reasons, these reconstructions would be expected to show some variation in relative amplitude.”
Later recons, as in Fig 10b which I showed, agree in all relevant respects, but do have somewhat greater variability. That can happen.
“I can’t find those words anywhere.”
I have long suspected you had a reading comprehension problem; here is a snapshot from page 436 of AR4.
OK, so where do you get that it was written by Briffa? The coordinating lead authors were Jansen and Overpeck.
An acquaintance of mine, who was also a lead author of the chapter, told me. Briffa got no pushback on the paragraph from the other authors.
There are no levels of low to which Stokes will not stoop to shill for the IPCC.
“There are no levels of low to which Stokes will not stoop to shill for the IPCC.”
In other words Karlomonte has no intellectual reply to NS’s points.
Read the Durham report yet, CNN Simon?
“It is better to subtract the global average, but it doesn’t make much difference.”
It’s not just ‘better’. Anything but the standard procedure is illegitimate. But don’t believe me, what do I know? Believe Ian Jolliffe, who Tamino regards as the great worldwide expert on PCA.
There is no such procedure as ‘short centering’. You either do it right or you are not doing PCA at all, but some other personal and arbitrary method of data manipulation.
You know that. Better than me.
“Anything but the standard procedure is illegitimate.”
Mathematics is not laid down in legislation. I have explained what is happening downthread. I’ll rewrite it here:
“Reconstruction is a process of forming a weighted average of the proxy data vectors. PCA intervenes to express the vectors with different axes, in which you can retain only the principal components, regarding the rest as noise.
You can first subtract anything you like from the data, as long as you add it in later. It is conventional to subtract the full-time means, because otherwise a constant will turn up as a principal component, and the analysis goes better without that. Subtracting a part mean loses some of that benefit. I’m sure it was a mistake, and no-one has done it since. But it doesn’t matter much.
A proof of that is MM2005 (described here) where they did the MBH98 analysis subtracting the full average, and got the same result (with a deviation near 1400 due to some other messing they did with Gaspe cedars).”
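Nick’s claim that the choice of centering “doesn’t matter much” when a strong common signal is present can be checked with a toy calculation. This is synthetic data, not MBH98’s actual proxies or code; the series lengths, noise level, and 100-step “calibration” window are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "proxies": 500 time steps x 20 series sharing one common signal.
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * t)
X = signal[:, None] + 0.5 * rng.standard_normal((500, 20))

def pc1(data, rows):
    """Leading principal component after subtracting the column means
    computed over `rows` (all rows = full centering; a late slice =
    MBH98-style 'short' centering)."""
    centered = data - data[rows].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

full = pc1(X, slice(None))         # centered on the full period
short = pc1(X, slice(400, None))   # centered on the last 100 steps only

# A PC's sign is arbitrary, so align before comparing.
if np.dot(full, short) < 0:
    short = -short
corr = np.corrcoef(full, short)[0, 1]
print(f"correlation of full- vs short-centered PC1: {corr:.3f}")
```

In this toy case the two leading components track each other closely; the argument in the thread is over what happens when the inputs are mostly noise.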
Estimates on top of estimates on top of models, Attribution studies (more models), and yet more estimates. Oh and did I mention uncertainties? LOL
and the reading of the entrails of tree rings
It’s a tree ring circus
“Reconstructions of climate data for the past 1,000 years (Figure 1b) also indicate that this warming was unusual and is unlikely to be entirely natural in origin”
Even Wikipedia would reject that claim of ‘unusual but unlikely to be natural…’ as original research, unverifiable in any other field.
An appeal to the authority of … Wikipedia?
That misrepresents what I said. I was just saying Wikipedia wouldn’t allow, under its rules, that sort of nonsense claim of ‘unusual but unlikely’.
Unusual does not mean unique.
Not commonly occurring does not equal being the only one of its kind.
I might worry if something was genuinely unique but not if it was unusual. Particularly true of climate and weather
Indeed. But Andy May was wrong. The IPCC did not claim it was the basis for the declaration. It just gave minor support by saying it was unusual. If it was usual, there might be more doubt about AGW, but it wasn’t.
I can agree with that except when they get to the part where they say:
It is even more unlikely to be entirely human in origin, which uncovers them as the liars they are.
The IPCC and collaborators are there to deceive people into thinking we are in a climate emergency which we clearly are not.
And Nick is here to prop up their deceptions, at any cost.
“The hockey stick is not flawed”
Hahahaha.
Nick progresses from specious dissembling to outright lying.
Not polite to ridicule a person’s religious beliefs Fenlander. Let Nick practice his religion in peace!
Although it’s also awkward when Nick recites his Creed and expects us to take it on faith because several other Climastrology theologians reached the same conclusions as St Mikey
Let Nick practice his religion in peace!
Yes, but he should do it in private.
Stokes is a professional gaslighter.
Mr Stokes: Doing away with the Medieval Warm Period and the Little Ice Age did deal with a problem the “CO2 is the major cause of warming” claim had, but the temperature variations are supported by historical evidence of what crops were grown where at given times, and records of when a given place had a freeze, as well as other similar accounts. As far as I know, it is still not possible to grow barley or do dairy farming in southern Greenland.
“Doing away with the Medieval Warm Period and the Little Ice Age did deal with a problem the “CO2 is the major cause of warming” claim had”
No, the usual illogic. The claim is that “if you add CO2 to the air, it will warm.” It doesn’t say that warming can’t happen for other reasons, as it clearly does in the glaciation cycle.
As for Greenland, here is Vatnahverfi
You know what I meant: temperature variation other than the Milankovitch cycles. Global warming advocates try really hard to deny any changes due to unknown factors, AKA “natural variation” not correlated to CO2 levels.
If you claim Mann’s pet graph is accurate, you are denying the LIA was historic.
“The claim is that ‘if you add CO2 to the air, it will warm.’ It doesn’t say that warming can’t happen for other reasons, as it clearly does in the glaciation cycle.”
So here we are trying to cut CO2 back to 200 ppm, and you can’t tell whether something other than CO2 could be causing the warming.
What are those other reasons Nick?
Orbital cycles are the only ones we have seen in the last million years or so. And they can’t be responsible for the current warming.
The only ones?? You aren’t saying the AMO, PDO, and the climate shifts of 1976 and 1997 have zero effect are you? Not to mention the Modern Solar Maximum.
AMO and PDO are cycles and have no long-term warming. “climate shift” is just a descriptor applied to observed warming.
Nonsense Nick. This is the raw AMO, that is, without detrending it. I see long-term warming, do you? Both the AMO and the PDO have always had long-term trends. ENSO does also, that is why you often see them detrended. Funny how those trends match solar cycles, I wonder why?
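For readers unfamiliar with the term, “detrending” an index like the AMO just means removing the least-squares straight line before looking at the oscillation. A minimal sketch with toy numbers, not the actual AMO series:

```python
import numpy as np

def detrend(x):
    """Subtract the least-squares straight line, which is all that
    'detrending' an index series means."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

# Toy series: an obvious linear trend plus a 60-step cycle.
t = np.arange(120)
raw = 0.01 * t + np.sin(2 * np.pi * t / 60)
flat = detrend(raw)
print(f"residual trend after detrending: {np.polyfit(t, flat, 1)[0]:.1e}")
```

The detrended series keeps the cycle but, by construction, has zero linear trend; which version one plots is exactly what the argument here is about.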
Mann’s hockey stick is so bad as to be close to scientific fraud.
It uses PCA and is not an average. But it is passed off as an average when it is no such thing.
The hockey stick looks completely different to the actual proxies that were used. Reconstructions that use averaging rather than PCA look completely different.
The hockey stick completely contradicts our well established knowledge of history, for example the history of the Nordic settlements of Greenland.
Steve McIntyre showed that random data could create virtually identical hockey sticks.
The hockey stick eliminated the MWP and LIA. In other words, it’s true climate change denial.
Mann’s supporters – i.e. members of the “Team” – produced “independent” reconstructions that supported the hockey stick. What a surprise.
Mann didn’t discover the hockey stick. He manufactured it.
Chris
I don’t like inflammatory rhetoric, but cwright’s comment seems to most accurately summarize the situation. The lone opposing voice has been swept under a sea of aggressive point-scoring.
“Mann’s supporters – i.e. members of the “Team” – produced “independent” reconstructions that supported the hockey stick. What a surprise.”
Good point. Not what you would call an unbiased appraisal.
In his Climategate emails, Mann referred to the climate perfidy cabal as “The Cause”.
Religious overtones.
Conspiracy. They conspired to mannipulate the temperature record. The Climategate emails tell the tale.
“Steve McIntyre showed that random data could create virtually identical hockey sticks.”
Actually it was Wegman who claimed that. And he cheated.
Nick,
“McIntyre and McKitrick demonstrated that the statistical methodology used to create the hockey stick, created the same shape from random red noise 99% of the time. McIntyre and McKitrick’s finding was supported by the National Academy of Sciences and reproduced by the Wegman Congressional investigation (Wegman, Scott, & Said, 2010, pp. 29-33). MAY, ANDY. POLITICS AND CLIMATE CHANGE (p. 76). American Freedom Publications LLC. Kindle Edition.”
“Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue.” McIntyre and McKitrick, 2005, GRL, in the abstract.
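The “persistent red noise” in that quote is, in its simplest form, an AR(1) process: each value is a fraction of the previous value plus a random shock. A minimal sketch (the persistence coefficient 0.9 is illustrative only, not M&M’s fitted value):

```python
import numpy as np

def red_noise(n, phi=0.9, seed=None):
    """AR(1) 'red' noise: x[t] = phi * x[t-1] + white shock.
    The closer phi is to 1, the more persistent the wandering."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

series = red_noise(600, phi=0.9, seed=1)
# For a long series the lag-1 autocorrelation should be near phi.
lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.2f}")
```

Such series wander slowly, so any finite stretch can look trend-like, which is why it is the standard stress test for trend-hunting methods.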
Wow Nick, You’ve really gone off the rails. Three sources reached the same conclusion and you are saying they all cheated??
The third is Wegman, but only the third.
Andy,
Third? I only see two. But they are both quoting the output of McIntyre’s program. And what it did, in testing persistent red noise, was to run 10000 instances, and show the emergence of hockey sticks. What they didn’t say was that inside the program they had selected the top 100 of those 10000 by shape (hockey-stick index – ie looks like HS). Then they selected results from that top 100.
Wegman independently reproduced the problem. The National Research Council describes the problem very well in Chapters 9 and 11 of their 2006 report, although it isn’t an independent reproduction.
In any case, the real problem is combining proxies into one regional record, it greatly reduces the true temperature variations.
The National Research Council report:
https://climatechangelive.org/img/fck/file/surftemps2000yrs.pdf
“Wegman independently reproduced the problem.”
No, he didn’t. He used McIntyre’s program. From the Wegman report:

In fact, some of the results were identical, which, since they are supposed to be randomly generated, means that he just copied McIntyre’s files.
In fact, that is how the cheat was originally spotted. He showed 12 synthetic P1 components, from random data, with hockey stick shape. But they were all blade upward. PCA has no preference for sign; if that were real, they should have been about half up and half down. The upward alignment came from the fact that they had been artificially selected within the code by hockey stick appearance, which required upward orientation.
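That sign argument is easy to check numerically. The sketch below uses plain white noise and a simplified hockey-stick index (end-segment mean minus full mean, in standard deviations) standing in for McIntyre’s; it illustrates the statistical point, and is not a rerun of anyone’s actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

def pc1_of_random(n_time=200, n_series=10):
    """Leading principal component of a batch of white-noise series
    (properly full-centered)."""
    X = rng.standard_normal((n_time, n_series))
    X -= X.mean(axis=0)
    u, s, _ = np.linalg.svd(X, full_matrices=False)
    return u[:, 0] * s[0]

def hs_index(x, blade=50):
    """Simplified hockey-stick index: how far the final `blade` values
    sit above the series mean, in standard deviations."""
    return (x[-blade:].mean() - x.mean()) / x.std()

pcs = [pc1_of_random() for _ in range(500)]
up_fraction = np.mean([hs_index(p) > 0 for p in pcs])
print(f"blade-up fraction, unselected: {up_fraction:.2f}")  # roughly half

# Select the top 5% by the SIGNED index: every survivor points up,
# the uniform orientation described above.
selected = sorted(pcs, key=hs_index, reverse=True)[:25]
print("all selected point up:", all(hs_index(p) > 0 for p in selected))
```

Unselected, the “blade” points up about half the time; select on a signed index and every survivor points the same way, which is the orientation tell at issue.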
“No, he didn’t.”
When I peer-reviewed papers for Petrophysics that was exactly how I reproduced work. I find your definition of “reproduce” odd.
Well, if you think running McIntyre’s code with McI holding his hand is independent reproduction, well…
But he didn’t even do that. Here is his graph claiming to be a randomly generated PC1, immediately following the para I quoted above:
And here is MM2005 Fig 1 that he refers to. They are supposed to be randomly generated. But they are actually identical. Wegman just copied from McIntyre’s output, generated using the clandestine 1% selection by hockey stick schtick:
Nick, That is the way reproduction works. Can an independent reviewer run your code and get the same answer? I did the same sort of stuff for years. M&M also tried to get the code and data from Mann for years, but never got all of it, that is what started the whole problem.
“Can an independent reviewer run your code and get the same answer?”
Any modern child could run the code and get the same answer. Stats professors need help. But as I showed above, he didn’t even do that. He claims to have run the code, and he got exactly the same answer. Sounds good? No! It’s a program with random data, generated in the program. You never get the same result twice. He just copied the numbers used for the MM2005 plot and replotted them.
I have run the code many times, examples here.
So, why would Mann not send his code and data to McIntyre? That, alone, should have invalidated the hockey stick. See Montford’s book, The Hockey Stick Illusion. You have very strange ideas about reproducibility, and you are not being consistent.
Mann’s hockey stick is proven scientific fraud.
Fixed that.
When someone eliminates the four warm periods since the world emerged from the last ice age 12,000 years ago and then claims that our present temperatures are unprecedented, we know that the hockey stick was straight-out fraud.
Hi Nick,
Can you point me to:
(a) the published peer-reviewed paper or textbook which derives the formal statistical basis and application of Mann’s uncentred PCA technique ie the actual mathematical demonstration of its value as a statistical method and its application together with limitations. While you are at it perhaps you can show the other mainstream applications of this same technique that have been published?
(b) the published paper or textbook showing how Mann’s uncentred PCA algorithm performs on trendless correlated random data eg ARMA or similar?
(c) similar to (b), the published paper where Mann and/or his co-authors, having dismissed M&M’s testing of the algorithm using red noise as being the “wrong sort of noise”, actually showed a test with “the right sort of noise” please.
Much appreciated!
Reconstruction is a process of forming a weighted average of the proxy data vectors. PCA intervenes to express the vectors with different axes, in which you can retain only the principal components, regarding the rest as noise.
You can first subtract anything you like from the data, as long as you add it in later. It is conventional to subtract the full-time means, because otherwise a constant will turn up as a principal component, and the analysis goes better without that. Subtracting a part mean loses some of that benefit. I’m sure it was a mistake, and no-one has done it since. But it doesn’t matter much.
A proof of that is MM2005 (described here) where they did the MBH98 analysis subtracting the full average, and got the same result (with a deviation near 1400 due to some other messing they did with Gaspe cedars).
Sorry Nick, Jedi mind trick misdirection doesn’t work on me.
Now, how about addressing the three points (a), (b) and (c) that I actually asked?
What I wrote does answer it, explicitly:
a. It was a mistake. There is no theory.
b. Such a test would be pointless. The purpose of Mann’s PCA is to eliminate noise. Feeding noise in would only lead to attenuation.
However, I actually did such a test here, to see the effect on P1. It was to demonstrate the effect of Wegman’s subterfuge. He claimed to be showing how Mann’s PCA made hockey sticks out of random data, but what he did was to do 10000 random runs, but then select the top 100 by McI’s hockey-stick index (ie looking like a hockey stick). Then he chose 12 from that 100.
c. The problem wasn’t the type of noise. It was
1. cheating
2. Stopping at the P1 stage, and not proceeding to a full recon.
Actually, on a), I now think it may not have been a mistake. Instrumental series, like GISS and NOAA, use short centering (30-year base periods). If they don’t, they get the trend wrong, since part of the trend migrates into the means. However, MBH did not, in the end, calculate trends.
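The trend-migration effect mentioned there can be illustrated with two synthetic stations that share the same warming but cover different periods. This is a toy sketch; the actual GISS and NOAA anomaly methods are more involved:

```python
import numpy as np

years = np.arange(101)
true = 0.01 * years  # a shared linear warming of 0.01 deg/yr

# Two stations measuring the same climate over different windows.
a = np.where((years >= 0) & (years <= 70), true, np.nan)
b = np.where((years >= 30) & (years <= 100), true, np.nan)

def combine(records, base=None):
    """Average station anomalies. With `base` = (start, stop), anomalies
    are taken against that common base period; with base=None, against
    each station's own full-record mean."""
    anoms = []
    for r in records:
        ref = r[base[0]:base[1]] if base else r
        anoms.append(r - np.nanmean(ref))
    return np.nanmean(np.vstack(anoms), axis=0)

full_mean = combine([a, b])             # each station's own record mean
based = combine([a, b], base=(40, 70))  # common 30-year base period

print("common-base trend:", np.polyfit(years, based, 1)[0])
print("own-mean trend:   ", np.polyfit(years, full_mean, 1)[0])
```

The common-base version recovers the true 0.01/yr trend exactly; anomalies against each station’s own record mean introduce jumps where stations enter or leave the network, biasing the combined trend.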
So the answers to my points are:
(a) there is no underlying theory to Mann’s uncentred PCA analysis, no supporting statistical theory, examples or applications using the same technique? So Mann just invented something that suited his purpose?
(b) Based on M&M, the test I mentioned is not pointless: if the algorithm “finds” hockey sticks in trendless random noise then it’s not fit for purpose, and based on M&M it clearly does. As someone who has worked professionally in 3D stochastic time-series simulation, non-conditional and conditional, including via 3D FFT, and who has developed both theory and applications (including commercial ones), I am fully familiar with testing algorithms and the requirements for dealing with autocorrelated processes. So Nick, pull the other one, it’s got bells on.
(c) As I recall the criticism levelled by Mann and co-authors at M&M was “wrong kind of noise” without ever showing the result with the “right kind of noise” so you have basically avoided the point.
“Based on M&M it clearly does.“
No, it certainly does not. Firstly, of course, there is the blatant cheating where they did a hidden selection of 1% by resemblance to HS shape. But even then, there is some residual HS appearance in the first principal component P1. But this is just a transfer; that component was taken from P2, P3 etc. The basic con of McIntyre was that he showed only these components, not a completed reconstruction.
But he slipped up once, in MM2005. There he showed, as Fig 1, how a full recon with the full mean compared with one with the MBH-style partial mean. And there is no extra HS effect from the partial mean.
“As I recall the criticism levelled by Mann and co-authors at M&M was “wrong kind of noise” without ever showing the result with the “right kind of noise” so you have basically avoided the point.”
You’d need to flesh out that recollection. But as I say, the full recon, done by MM2005, using their kind of noise, showed no difference, so there is no point to avoid.
Sorry, that last answer is misdirected. On the reproduction using noise, the key thing is that Mann et al were not then aware of the artificial selection of HS shapes that had been done.
NS says:
“On the reproduction using noise, the key thing is that Mann et al were not then aware of the artificial selection of HS shapes”
Oh, so that’s okay then. The method is bollocks, but because they “didn’t know” it artificially selected HS shapes it doesn’t matter?
Nick, you are beyond surreal. You don’t think that before the HS became the poster child of global warming and trillions of dollars were bet on it in our reckless lurch towards Net Zero that they maybe should have checked it first?
The HS is peripheral to AGW. It says nothing about causes; it just gives a point of comparison with the past. It was a pioneering effort. But the key thing, shown by later replication, is that they got it right.
The method did not artificially select HS shapes. That was the blatant cheat in McIntyre’s program, as used by Wegman. It claimed to show the result of random inputs, but in fact interposed a selection step which took the top 1% that looked right. I’m amazed that you could defend that. Or even more that you don’t have enough interest to try.
“the HS became the poster child of global warming”
Yes, the Hockey Stick charts are a distorted version of the global temperature profile, and are the BIG LIE of Human-caused Global Warming/Climate Change.
Without a Hockey Stick chart, climate alarmists would have *nothing* to point to as “evidence” of their claims that CO2 is overheating the Earth.
This BIG LIE has fooled millions of people and caused the waste of TRILLIONS of dollars and is about to cause the loss of our economic vitality and personal freedoms if the climate alarmists have their way.
Without the BIG LIE, the climate change alarmists have NOTHING.
And Stokes defends it up, down, and sideways, which makes Stokes a liar.
And I note you have not disputed my first point:
(a) there is no underlying theory to Mann’s uncentred PCA analysis, no supporting statistical theory, examples or applications using the same technique? So Mann just invented something that suited his purpose?
Nitpick Nick strikes again!
Nothing to say Kalomonte writes again…
The same flawed data, and minor variations on the same flawed method.
Funny how the hockey stick output doesn’t look anything like any of the inputs.
“Funny how the hockey stick output doesn’t look anything like any of the inputs.”
That goes for the instrument-era Hockey Stick, too.
The input data doesn’t show any unprecedented warming today.
The output does show unprecedented warming today.
How does one get unprecedented warming in the instrument era out of data that does not show unprecedented warming in the instrument era? Answer: Fraud.
Nice essay. The alarmist narrative is slowly losing momentum. As usual in these episodes this is accompanied by increased frenzy on the part of the extreme wing of the movement.
In reporting, the extreme wing is the Guardian, the BBC, the Washington Post, and Ars Technica, and the paradigm case of the hysteria is the Guardian’s regular front-page reports on the supposed climate crisis, aka global heating. Interestingly, the NY Times appears to be slightly backing off.
In activism the paradigm cases are Just Stop Oil and Extinction Rebellion, whose rhetoric is getting increasingly unhinged and whose disruptions are less and less tolerated by the public and increasingly counter-productive.
What can we expect to happen next? I would predict that the academic leadership of the movement gets more and more worried about the realism of previous apocalyptic predictions, and starts backing off. Maybe the second coming will not be this July; maybe we have till next April.
This leads to a split between the extremists and the intellectual leadership. Unlike movements predicting the second coming on a given date, the present hysteria is not going to have one critical refuting event (or lack of it). What is more likely is that the real cost of the measures the extremists demand becomes apparent, and arouses general skepticism and resistance. This coincides with decreasing credibility of the alarmist thesis and the backing off of the intellectual leadership. The measures, Net Zero and the Green New Deal, are diluted, postponed and abandoned piecemeal as they are electoral poison.
The rhetoric however carries on for longer. So the last true believer will be the Guardian. The BBC will probably slowly drop its alarmist material as will the Post and NPR. But the Guardian will be the last one standing.
The thing to watch as an indicator is the Net Zero programs, and the UK is the canary in the coal mine. If the UK starts backing off on the EV and heatpump agenda, at the same time as the next COP starts to back off or obviously fails to agree any global actions, then we’ll know we have gone over the peak. Unlike the religious predictions and failures this will be quite a rounded, long drawn out peak, because in the climate case the activists have been careful only to make very long term predictions. But there will be a peak, probably in the next five years, and future historians will probably regard the gradual abandonment of the Net Zero programs as the key indicator of it.
The thing to worry about is that the hysteria has penetrated the political establishment in the English speaking countries, and the useless and impossible programs they have endorsed will do an immense amount of social and economic damage before they are finally and slowly abandoned. We will never get to a wind and solar powered grid, but we may wreck a lot of lives trying.
A lovely example of what you say has just gone past my eyes and I wasn’t even searching….
(Am presently in my (morning) coffee shop and it has a big TV on the wall showing the BBC News channel = pictures with subtitles, no sound thankfully)
When I first arrived I glanced at 3 different successive stories..
1. Concerning sewage getting into England’s rivers: of course, the reason it is happening and why nothing is being done is Climate Change and Sea Level Rise.
2. A story about the little e-scooters (the ones that blow up in your living room when you’re charging them). The ‘contra’ argument came from the Blind People’s Association, concerned about the scooters being left lying around pavements and tripping them up. The ‘pro’ argument started with “Oh you must remember we are in a Climate Emergency.” Then another ‘contra’ came on saying that people should ride pushbikes, walk and take public transport and not use e-scooters, because “people need to get more exercise” and because of Greenhouse Gas Emissions and The Climate Emergency.
3. Concerning “Smooth Snakes” being re-introduced to somewhere in Dorset by subtracting them from some-other-where in Devon. The primary reason why this needed to be done was………………
THAT is Magical Thinking and Propaganda = repeating the same old same old over and over again until the lie becomes true and even you yourself believe something you originally knew to be wrong.
I read the same complaining from earlier generations cloaked in codewords of the 2020s.
Variations of:
“Clean up your s&%#@&”
“It’s not my fault”
“I’m special, do it my way”
Nice post Michel!
Sadly I recall seeing the climate alarmism knob being turned up to 11 around a decade ago and thinking the hysteria could not go any higher. We now seem to be past 14 and still climbing!
It’s like living in a (western) world gone mad.
Do read Steve McIntyre’s climateaudit.org for his scornful destruction of the PAGES Hockeystick which frontispieces AR6 but UNBELIEVABLY doesn’t feature anywhere inside. Also read Donna Laframboise’s two books on the long history of IPCC dishonesty particularly on the chairmanship of railway engineer Rajendra Pachauri, the corrupt sexual predator.
“which frontispieces AR6 but UNBELIEVABLY doesn’t feature anywhere inside”
I don’t know the word “frontispieces”, but the point is super strong. “UNBELIEVABLY” is the usage all-capitals typing was invented for.
Ahhh, never mind: “A frontispiece in books is a decorative or informative illustration facing a book’s title page, usually on the left-hand, or verso, page opposite the …”
Learned another thing today.
“The hot spot still exists in AR6 and has gotten worse (McKitrick & Christy, 2020). It is notable that if the human greenhouse gas emissions are removed from the climate models the fictitious hot spot goes away and the models move much closer to observations.”
This is important because the results of computer models are often heavily reliant on only one or two variables even though the model itself may have hundreds of variables.
(I used to make and sell computer models for venture capital transactions. There were loads of variables but the one main driver was the compound rate of profit growth that was input)
Even Gavin Schmidt of NASA climate model team has admitted to the “hot models” problem for climate models. In an article in Nature magazine in May 2022 he showed that if you remove the most implausible hot models the overall result drops back to something that isn’t really a climate emergency. You can get the article from Nature magazine but there is a copy on this site (about half way down) https://www.juststopnetzero.com/
David Tallboys
Ocean surface temperature cannot be sustained above 30C with the present atmospheric mass. Adding 285 ppm by mass would increase the limit by 0.006C. All CMIP6 models show the western tropical Pacific exceeding 30C this century or, in the case of INM, much cooler than present but increasing to 30C by the end of the century.
Open ocean surface temperature controls the energy balance. You only need to take a look at the last week in the Bay of Bengal:
https://earth.nullschool.net/#2023/05/08/0000Z/ocean/primary/waves/overlay=sea_surface_temp/orthographic=-269.53,14.22,566/loc=88.161,14.863
Hit 31C on May 8 before convective instability kicked in.
Then 5 days later back under 30C:
https://earth.nullschool.net/#2023/05/13/0000Z/ocean/isobaric/250hPa/overlay=sea_surface_temp/orthographic=-269.53,14.22,566/loc=88.161,14.863
Convection had to kick into overdrive to push it under 30C but that is the power of convection.
A delicate radiation balance is such a dumb idea. Manabe et al., who dreamt up this preposterous nonsense, need to be named and shamed. They have done a great disservice to humanity.
The IPCC prove themselves fools, because if the modelling had been correct back in AR1 they would simply verify with each update that all the predictions are on track. Updates would be getting shorter rather than longer. Instead they pile junk science on top of junk science, so thick it stinks of desperation.
Can evaporation increase atmospheric mass?
Yes – evaporation adds water to the mass but it is surface temperature dependent and reaches its limit at 30C.
At 30C, the rainfall gets up to twice the evaporation. That happens because the air over water below 30C gets drawn into the convecting column during instability.
The mid-level convergence into the unstable convecting column can spin up a cyclone if the 30C region is large enough and the latitude is higher than 7 degrees, so that Coriolis acceleration comes into play.
“Then 5 days later back under 30C”
This seems like a reasonable theory to me.
I guess if it happens over and over and over again, that you would be on to something.
And I believe you said elsewhere that you had found no exceptions to this 30C limit.
So, very interesting.
It is not a theory. It is well known and observable every day of the year somewhere over the oceans. It has been identified in scientific papers dating back to the 1970s at least. The physical process has not been well described quantitatively prior to my work.
Ramanathan got close before he was co-opted into the church:
https://www.nature.com/articles/351027a0
The regulating or sustainable limit is 30C, which is 303K.
All I have done is quantified why it happens:
https://wattsupwiththat.com/2022/07/23/ocean-atmosphere-response-to-solar-emr-at-top-of-the-atmosphere/
The only exception to the annual 30C limit is off the east coast of PNG due to the impact of land. Some confined water ways like the Persian Gulf do not regulate to 30C because the dry air from the north prevents convective instability.
Thanks, Rick. I like what you are doing. 🙂
The Global Warming Potential (GWP) numbers for methane CH4 are tied to the concentration of Atmospheric CO2 and change over the years, but it doesn’t look like a direct correlation:
FAR 1990 GWP 63
SAR 1995 GWP 56
TAR 2001 GWP 62
AR4 2007 GWP 72
AR5 2013 GWP 85
AR6 2021 GWP 82.5
Historical Sea Ice Extent has been truncated and changed over the years:
And below is figure 9.13 from the AR6. Interpretation is difficult especially if one attempts to compare it to the five previous graphs of sea ice extent shown in my post above. Cynics might say that’s a feature.
Ugh, AR6 changed the color scale 3/4 of the way through the graphic.
1990 seemed like such an innocent moment. I see a defensible argument that they meant to do science. What happened?
The “I” in IPCC.
Money and power got in the way.
Reviewing the problems with the IPCC reports is very nearly identical to reviewing the problems with official “climate science”.
Or reading Pravda day after day to see if today will be the day there will be something truthful in it.
Adjusting climate model parameters to make the model fit a set of observations is like squeezing an inflated balloon. Squeeze here, it bulges there.
In the models, adjust away a simulation error there, the error grows somewhere else.
Carl Wunsch has pointed out the non-convergence of ocean models. When asked about the meaning of a non-converged model, modelers brush off the question with the comment that the results ‘look reasonable.’
For modelers, data are either confirmatory or an annoyance. Admission of ignorance — the first step of the principled scientist — is beyond their competence.
Climate models simulate a bizarro-Earth. Modelers have cultivated a bizarro-mentality in embracing them.
It goes a long way beyond the modellers. This is the citation for the 2021 Nobel Prize in Physics:
Science has lost any integrity it ever had. It is a political propaganda machine.
I don’t accept Nobel Prize as a proxy for science.
Have become “participation certificates”
Eisenhower was a prophet…
“When asked about the meaning of a non-converged model, modelers brush off the question with the comment that the results ‘look reasonable.’”
The easy models are already taken, yet they must publish.
To me, the focus on temperature misses the forest for the trees. The question is: What bad outcomes are we trying to avoid by mitigating climate change? Show us. Show us the data that says climate change is causing any damage. Any damage. What damage is climate change causing? Show us.
There has been no increase in the frequency or severity of hurricanes, drought, rainfall, tornadoes, heat waves, or cold waves. The Antarctic ice pack is not melting wildly. It recently had its coldest winter on record. Sea ice floats in the Arctic all summer. No island nations have been drowned by rising seas. In fact, 75% of island nations have seen their land area grow.
I’ve asked dozens of greenies to show me evidence that severe weather has increased. Because if you can show that severe weather is increasing, and if the trend continues and/or accelerates, then there is a real danger to humanity. But if you can’t even show me that severe weather is increasing, why should I worry about anything? Even if temps are increasing, if it’s not causing any harm why should I worry? What harm is climate change causing? Can anyone show me? Bueller? Anyone?
It is because money is now driving the entire kit and caboodle. It started with narcissistic scientists looking for grants but has spread far, far beyond that to power, control, and trillions of dollars to politicians and the rich.
You are absolutely right. But what I can’t figure out is why more people don’t point out the obvious silliness of their arguments. Why does no one just stand up and say: “Where’s the problem?” “What problems have increased temps or climate change ever caused? Because if it’s not causing any problems, why are we worried?”
Those people are too busy working or fishing to waste time on pointing out the obvious silliness of arguments.
Because there’s always some fire, flood, storm or something somewhere in the world going on. So when you say ‘where’s the problem?’ you give them the opportunity to say or look at you like you’re in denial; and you can never convince them without 12 months of education and even then it’s doubtful.
Figure 2.11 at least separated the “blade” (instrumental record) from the “stick” (proxy datasets).
They also included an “updated” version of the Hockey Stick graph, using PAGES2k (proxy) data from ~900AD, in the Technical Summary of the AR6 WG-I assessment report.
Attached is a screenshot of “Box TS.2, Figure 2 (panel b)”, which can be found on page 46.
Mark BLR’s Chart shows how stuff “gets interesting” after the proxy data stops. Did trees stop making rings in the 1990s?
Also: Why would “Simulated” data stop while “Observed” data keeps going?
“They also included an “updated” version of the Hockey Stick graph, using PAGES2k (proxy) data from ~900AD, in the Technical Summary of the AR6 WG-I assessment report.”
Funny how Mann’s HS was so bad and yet people keep replicating it.
Using exactly the same dodgy proxies I understand.
Wasn’t there a Climategate email where proxy curator Keith Briffa questioned MBH’s selection of tree-ring proxies?
No, there are many more proxies now, and of many different types.
So why do they keep telling the MBH story?
Sounds like MBH got most things right.
My understanding is that Steve McIntyre has made very similar complaints.
Although he apparently now mostly “posts” on Twitter (which I actively avoid) his last article on the Climate Audit website touched on this issue.
At the end of the “The Decline, the Stick and The Trick – Part 1” post, dated the 2nd of November 2021, at https://climateaudit.org/ :
See also his reaction to the first “Subject to copy-editing” version of the AR6 WG-I report back in August of 2021.
Direct URL : https://climateaudit.org/2021/08/11/the-ipcc-ar6-hockeystick/
PS : See also Steve’s “Problems with PAGES2K” articles at the following link.
https://climateaudit.org/tag/pages2k/
Thanks for the links. I especially like this:
In a Climategate email, Keith Briffa famously sneered at Michael Mann’s claim that a temperature reconstruction could represent a hemisphere, including the tropics, by regressing a “few poorly temperature representative tropical series” against “any other target series” – even the trend of Mann’s own “self-opinionated verbiage” – as follows:
People frequently say that the PAGES2K reconstruction has “vindicated” Mannian reconstructions – but neglect to mention that PAGES2K similarly regressed a “few poorly temperature representative tropical series” onto an increasing trend – thus, repeating, rather than vindicating, (one of) Mann’s erroneous methodologies.
Briffa’s distaste for Mann carried into AR4.
Thanks Andy.
This was what I was putting to Nick.
It’s also very telling that reading the Climategate emails was what caused Judith Curry and many other recognized climate experts to reject Mann’s cabal in disgust.
“Funny how Mann’s HS was so bad and yet people keep replicating it.”
The Hockey Stick shape comes from the bastardized surface temperature record, not Mann’s proxies.
The real instrument-era temperature profile would show the Early Twentieth Century to be just as warm as it is today. That’s what all the actual data shows. The computer-generated Hockey Stick is a distortion of this temperature profile.
It was just as warm in Australia in the recent past (instrument era) as it is today. How do you get a “hotter and hotter and hotter” Hockey Stick profile out of this benign temperature profile below?:
I never get any answers from the climate alarmists when I ask this question.
I think that is because they can’t answer the question.
You can’t believe in a Hockey Stick chart if you see the actual temperature data in unmodified form. The unmodified temperature data does not show a Hockey Stick “hotter and hotter and hotter” temperature profile. Rather, it shows it was just as warm in the recent past as it is today, and there is no unprecedented warming. This also shows that CO2 is a minor, insignificant player in the Earth’s atmosphere.
The actual temperature data shows the Climate Alarmists are making things up out of thin air when it comes to temperatures.
Just like how they will never answer this simple question:
What is the optimum CO2 concentration level in Earth’s atmosphere?
Kind of like eating vomit and retching it back up when you see how bad it tastes.
From the article: ” The first IPCC Physical Science Basis report is called “FAR” and was first published in 1990. An updated 1992 version of the report contains this statement:
“global-mean surface air temperature has increased by 0.3°C to 0.6°C over the last 100 years … The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. … The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
(IPCC, 1992, p. 6).”
This is an honest report on the state of the art at the time.
Subsequent reports engaged in rampant unsubstantiated assumptions and assertions about CO2 and the Earth’s climate, going from “we don’t know” to “everything is caused by CO2” without having any evidence that this is the case. It’s pure speculation passed off as facts.
This first report is the true state of the art, and it is just as true today as it was in 1992. There is no unequivocal detection of the enhanced greenhouse effect from observations to this very day, 31 years later.
There is no evidence humans are causing any changes in the Earth’s climate by burning fossil fuels and natural gas.
Our societies have been hugely misled about CO2 and the state of the science. There is no evidence CO2 needs to be curtailed or regulated or restricted in any way. The CO2 policy is not based on facts.
Pielke has recently used some very strong language regarding the craptacular “synthesis” report, regarding some data it synthesized.
He still believes the world might end, just not today.
Remains a little bit pregnant.