by Judith Curry
An insightful interview with Bjorn Stevens.
Frank Bosse provided this Google translation of an interview published in Der Spiegel, print issue 13/2019, pp. 99-101, March 22, 2019.
Excerpts provided below, with some minor editing of the translation.
begin quote:
Global warming forecasts are still surprisingly inaccurate. Supercomputers and artificial intelligence should help. By Johann Grolle
It’s a simple number, but it will determine the fate of this planet. It’s easy to describe, but tricky to calculate. Researchers call it “climate sensitivity”.
It indicates how much the average temperature on Earth warms when the concentration of greenhouse gases in the atmosphere doubles. Back in the 1970s, it was determined using primitive computer models. The researchers came to the conclusion that its value likely lies somewhere between 1.5 and 4.5 degrees.
This result has not changed until today, about 40 years later. And that’s exactly the problem.
The computational power of computers has risen many millions of dollars, but the prediction of global warming is as imprecise as ever. “It is deeply frustrating,” says Bjorn Stevens of the Hamburg Max Planck Institute for Meteorology.
For more than 20 years he has been researching in the field of climate modeling. It is not easy to convey this failure to the public. Stevens wants to be honest, he does not want to cover up any problems. Nevertheless, he does not want people to think that the latest decades of climate research have been in vain.
“The accuracy of the predictions has not improved, but our confidence in them has grown,” he says. The researchers have examined everything that might counteract global warming. “Now we are sure: she is coming.”
As a decision-making aid for the construction of dykes and drainage channels, the climate models are unsuitable. “Our computers do not even predict with certainty whether the glaciers in the Alps will increase or decrease,” explains Stevens.
The difficulties he and his fellow researchers face can be summed up in one word: clouds. The mountains of water vapor slowly moving across the sky are the bane of all climate researchers.
First of all, it is the enormous diversity of their manifestations that makes clouds so unpredictable. Each of these types of clouds has a different effect on the climate. And above all: they have a strong effect.
Simulating natural processes in the computer is always particularly sensitive when small causes produce great effects. For no other factor in the climatic events, this is as true as for the clouds. If the fractional coverage of low-level clouds fell by only four percentage points, it would suddenly be two degrees warmer worldwide. The entire temperature rise that was deemed just tolerable in the Paris Agreement thus corresponds to a mere four percentage points of cloud cover – no wonder that binding predictions are not easy to make.
In addition, the formation of clouds depends heavily on the local conditions. But even the most modern climate models, which indeed map the entire planet, are still blind to such small-scale processes.
Scientists’ model calculations have become more and more complex over the past 50 years, but the principle has remained the same. Researchers are programming the earth as faithfully as possible into their computers and specifying how much the sun shines in which region of the world. Then they look how the temperature on their model earth adjusts itself.
The large-scale climatic events are well represented by climate models.
However, problems are caused by the small-scale details: the air turbulence above the sea surface, for example, or the wake vortices that mountains leave in passing fronts. Above all, the clouds: the researchers cannot let the water in their models evaporate, rise and condense as it does in reality. They have to make do with more or less plausible rules of thumb.
“Parametrization” is the name of the procedure, but the researchers know that, in reality, it is the name of a chronic disease that afflicts all of their climate models. Often, different parameterizations deliver drastically divergent results. Arctic temperatures, for example, are sometimes more than ten degrees apart in the various models. This makes any forecast of ice cover seem like mere reading of tea leaves.
“We need a new strategy,” says Stevens. He sees himself as obliged to give better decision support to a society threatened by climate change. “We need new ideas,” says Tapio Schneider from Caltech in Pasadena, California.
The Hamburg Max Planck researcher has therefore turned to another type of cloud, the cumulonimbus. These are mighty thunderclouds, which at times, dark and threatening, rise higher than any mountain range to the edge of the stratosphere.
Admittedly, this type of cloud has a comparatively small influence on the average temperature of the earth, Stevens explains, because they reflect about as much solar radiation back into space as they trap of the heat radiated from the earth. But cumulonimbus clouds are nevertheless an important climatic factor, because these clouds transport energy. If their number or their distribution changes, this can contribute to the displacement of large weather systems or entire climatic zones.
Above all, one feature makes Stevens’ spectacular cumulonimbus clouds interesting: they are dominated by powerful convection currents that operate on a scale large enough to be resolved by modern supercomputers. The researcher has high hopes for a new generation of climate models that are currently being launched.
While most of their predecessors laid a grid with a mesh size of about one hundred kilometers over the globe for their calculations, these new models have reduced the mesh size to five kilometers or even less. To test their reliability, Stevens, together with colleagues in Japan and the US, carried out a first comparison simulation.
It turned out that these models represent the tropical storm systems quite well. It therefore seems that this critical part of the climate change process will be more predictable in the future. However, the simulated period was initially only 40 days. Stevens knows that to portray climate change, he has to run the models for 40 years. It is still a long way until then.
Stevens, meanwhile, rather fears that it is precisely the cumulonimbus clouds that could spring surprises. Tropical storm systems are notorious for their unpredictability. “The monsoon, for example, could be prone to sudden changes,” he says.
It is possible that the calculations of the fine-mesh computer models will make it possible to predict such climate surprises early. “But it is also conceivable that there are basically unpredictable climatic phenomena,” says Stevens. “Then we can simulate as exactly as we like and still not arrive at any reliable predictions.”
That’s the worst of all possibilities. Because then mankind continues to steer into the unknown.
end quote.
“But it is also conceivable that there are basically unpredictable climatic phenomena”
Random Climatic Mutation?
It’s not that complicated. The complications are generally only along the path to a steady state and have little to do with what that steady state will be. They are cited only to provide the wiggle room to make the impossible seem plausible.
If ‘e’ is the ratio between the planet’s average emissions (240 W/m^2) and the average surface emissions (390 W/m^2) corresponding to an average temperature T (288K), the sensitivity is given exactly by 1/(4*e*σ*T^3), where σ is the Stefan-Boltzmann (SB) constant of about 5.67E-8 W/m^2 per K^4. There’s no wiggle room for it to be anything else unless either conservation of energy (COE) or the SB law is violated. The consensus breaks COE by misapplying feedback analysis, which literally creates energy out of thin air.
Yes, the Earth is not an ideal BB, but a non-ideal BB can be exactly quantified as a gray body with non-unit emissivity (e). There are no other laws of physics that can quantify how a radiating body like the Earth must behave in response to new energy input (forcing), and feedback is not new energy, as the models seem to think.
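For anyone who wants to check the arithmetic, here is a minimal Python sketch of the calculation exactly as stated above; the 240, 390 and 288 K figures and the gray-body framing are the commenter’s premises, not settled results.

```python
# A sketch that just reproduces the arithmetic of the comment above; the
# numbers (240, 390, 288 K) and the gray-body framing are the commenter's.
SB = 5.67e-8                 # Stefan-Boltzmann constant, W/m^2 per K^4
T = 288.0                    # assumed average surface temperature, K
planet_emissions = 240.0     # average emissions of the planet, W/m^2
surface_emissions = 390.0    # average surface emissions, W/m^2

e = planet_emissions / surface_emissions        # effective emissivity, ~0.62
sensitivity = 1.0 / (4.0 * e * SB * T ** 3)     # claimed sensitivity, K per W/m^2

print(f"e = {e:.3f}")
print(f"sensitivity = {sensitivity:.2f} K per W/m^2")
print(f"per 3.7 W/m^2 (one CO2 doubling): {3.7 * sensitivity:.2f} K")
```

Run as written, this gives roughly 0.30 K per W/m^2 and about 1.1 K per doubling, which is where the figure quoted further down the thread comes from.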
co2isnotevil you could contact Will Happer and offer some of your time as a volunteer.
happer@Princeton.EDU
asarchi@Princeton.EDU cc
We’ve known that the climate is chaotic since the first computer models.
“highly sensitive to initial conditions”, that’s the butterfly effect. Lorenz discovered and described it. Basically, it is impossible to predict the climate with any precision.
So, we know the climate can’t be modeled with any precision, no matter how powerful the computer. Still, they throw money at the problem. MOB’s irreducibly simple model solved on a slide rule is just as valid as the latest finite element model solved on a super computer.
This isn’t a secret nor is it arcane, but it is the elephant in the room.
Spontaneous Climatic Variation http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
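For illustration of “highly sensitive to initial conditions”, here is a minimal Python sketch of the classic Lorenz (1963) system, with standard parameters and a crude forward-Euler integrator assumed for simplicity: two runs that start one part in a million apart end up completely different. This illustrates the general chaos point, not any particular climate model.

```python
# Minimal sketch (standard Lorenz-63 parameters, crude forward-Euler
# integration, dt = 0.01): two trajectories starting 1e-6 apart diverge
# completely, i.e. "the butterfly effect".
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a one-in-a-million perturbation

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.0f}  separation = {np.linalg.norm(a - b):.3e}")
```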
Just because we can’t solve a problem doesn’t mean it isn’t solvable. Our inability to predict a process shouldn’t lead us to believe that the process isn’t predictable. Unless, that is, it can be shown that the laws of cause and effect don’t apply in a particular case! Think I’d rather opt for our ignorance/inability as the explanation, because causelessness as an explanatory mechanism is the ultimate opt-out – at any level, quantum included – and makes no sense at all to me. Climate is “simply” an exceedingly complex phenomenon, and that complexity is beyond our current capacity to fully grasp and predict. Chaotic is how it seems, not how it is. We need to work harder and smarter is all.
“The accuracy of the predictions has not improved, but our confidence in them has grown,” he says. The researchers have examined everything that might counteract global warming. “Now we are sure: she is coming.”
Seriously??? The accuracy hasn’t improved, but you’re more confident in their predictions? Delusional.
Everything? What about the consequences of utterly crap input data, especially the hopelessly inadequate historical data?
You beat me to it, Don. Did Bjorn Stevens even listen to himself?
Well, unless he meant that he is more confident than ever that the models are crap for predicting the future state of Earth’s climate, in which case a model is only useful for studying the interactions of the variables based on the built-in assumptions of a model. I can go with that.
“The accuracy of the predictions has not improved, but …**…”
** =
1.) …the climate conferences are much better.
2.) …my paycheck has really grown.
3.) …the girls studying climate today are prettier.
4.) …you should see my office.
Does that mean they are more confident in their inability to predict the climate?
In 1970, the top-of-the-line computer was an IBM System/360; now they spend $70 million on a computer and get the same result.
Similar to my stock market picking, or sports pools.
I do just as well randomly selecting as I do spending weeks looking at all the data…
Sound like my NCAA bracket….
Yes, at the end of every calendar year, when the MSM is even lazier than usual, it trots out the inevitable “this monkey predicted the stock market better than we did” articles.
Case in point:
https://www.forbes.com/sites/rickferri/2012/12/20/any-monkey-can-beat-the-market/#5680a87d630a
I once tried out a currency trading service. After my trial, with virtual money, they asked if I’d like to sign up.
I told them that I’d found a really, really good way to make money at this. All they had to do was watch whatever I did, and do exactly the opposite! Basically every trade I made lost me money.
Needless to say, I did not take up their offer.
While it is tempting to solely use the gambit of watching dumb people and doing the opposite, it can backfire. As they say, even a broken watch is correct twice a day.
But it can be a useful start. I remember 20 years ago discussing buying some bonds for my portfolio.
My friends laughed. “You get, what, 3%. Pffft…”
They went on to talk about Bre-X and how it was going up…
“Does that mean they are more confident in their inability to predict the climate?”
Certainly makes more sense when interpreted that way. 🙂
” Nevertheless, he does not want people to think that the latest decades of climate research have been in vain.”
I quit reading at that point. I am fully capable of thinking for myself completely outside what a climate modeler wants for me. All I need are facts/truth.
This is the beating heart of the problem with the CAGW science flim-flam and what the world is being subjected to with this Consensus B.S. What we need to “Think” and “Feel” about climate change is WHAT THEY WANT US TO KNOW, truth be damned.
Of course you have more confidence in results that have not improved in accuracy with bigger, faster and more expensive computers. How else would you explain the money expended to polish GIGO?
Congratulations, Don, you’re getting right into the heart of the matter. To utilize supercomputers and artificial intelligence you have to train the computer to think, and this is done by presenting examples of successful calculations in the general theme of interest. So what examples of successfully calculated climate change due to increasing CO2 do they have to present? Nothing! So you cannot utilize AI until you have successful examples to present. Period. You computer geeks don’t bother to argue this; I managed a high-tech research group and we examined and rejected AI because of training-example issues.
Apparently with the climate modelers, GIGO means “garbage in, _gospel_ out”…
Gonna have to steal that one…
Or rather GOGI: gospel in, garbage out.
An artificial intelligence is, at best, as good as the training set. Given what the carbon climatists have done torturing data, I doubt their training set would be very good. Similarly, their more conventional models are plagued by bad assumptions and avoidance of important details which are hard to simulate.
AI is not likely to help; if anything, it will make matters worse. Consider the following article:
https://www.bbc.co.uk/news/science-environment-47267081
AAAS: Machine learning ‘causing science crisis’
“Machine-learning techniques used by thousands of scientists to analyse data are producing results that are misleading and often completely wrong.
Dr Genevera Allen from Rice University in Houston said that the increased use of such systems was contributing to a “crisis in science”.
She warned scientists that if they didn’t improve their techniques they would be wasting both time and money. Her research was presented at the American Association for the Advancement of Science in Washington.
A growing amount of scientific research involves using machine learning software to analyse data that has already been collected. This happens across many subject areas ranging from biomedical research to astronomy. The data sets are very large and expensive.
But, according to Dr Allen, the answers they come up with are likely to be inaccurate or wrong because the software is identifying patterns that exist only in that data set and not the real world. “Often these studies are not found out to be inaccurate until there’s another real big dataset that someone applies these techniques to and says ‘oh my goodness, the results of these two studies don’t overlap‘,” she said.
“There is general recognition of a reproducibility crisis in science right now. I would venture to argue that a huge part of that does come from the use of machine learning techniques in science.”
The “reproducibility crisis” in science refers to the alarming number of research results that are not repeated when another group of scientists tries the same experiment. It can mean that the initial results were wrong. One analysis suggested that up to 85% of all biomedical research carried out in the world is wasted effort….
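As a toy illustration of the failure mode Dr Allen describes (this sketches the general overfitting problem, not her actual methods), here is a flexible model fit to pure noise: it “finds” structure that an independent sample drawn the same way does not reproduce.

```python
# Toy sketch: a flexible model fit to pure noise shows apparent skill on
# the data it was fit to and none on an independent sample.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 15)
y_train = rng.normal(size=15)        # pure noise: there is no real signal
y_test = rng.normal(size=15)         # an independent sample from the same process

coeffs = np.polyfit(x, y_train, deg=9)   # very flexible 9th-degree polynomial
fit = np.polyval(coeffs, x)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"apparent skill on the data it was fit to: R^2 = {r_squared(y_train, fit):.2f}")
print(f"skill on the independent sample:          R^2 = {r_squared(y_test, fit):.2f}")
```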
“One analysis suggested that up to 85% of all biomedical research carried out in the world is wasted effort…”
I wonder if this analysis relies on machine learning software ◔̯◔
Exactly.
Required statement to stay in the climate cult.
Charitable reading suggests that we shouldn’t rush to judgement when interpreting someone’s words. They may have meant something slightly different from our interpretation, and in a case like this where it seems that the guy is bright and honest, and not using his first language (or it’s a translation), it seems unlikely that “delusional” is an appropriate response. It could be that we as readers are misjudging his intent. Might be worth getting an explanation for the apparent contradiction.
Interestingly, I have seen several calculations of ECS based on observations, and they are all about 2 or below. Perhaps the real problem is that they don’t like the fact that the calculations keep coming in on the low end of the 1.5-4.5 spectrum.
The entire 1.5-4.5 spectrum can be falsified in many ways. The sensitivity factor used to arrive at this is 0.8C +/- 0.4C per W/m^2. Doubling CO2 is claimed to be equivalent to 3.7 W/m^2 of incremental solar forcing where 3.7 * 0.8 = 2.96 degrees C and is the origin of the middle of the CO2 ‘sensitivity’ range of 3C +/- 1.5C.
If the surface temperature increases by 0.4C per W/m^2 (the low end of the IPCC’s range), from 288K up to 288.4K, the NET emissions increase from 390.08 W/m^2 to 392.25 W/m^2 for an increase of 2.17 W/m^2 per W/m^2 of forcing. Currently, the 390.08 W/m^2 of average NET surface emissions are the result of 240 W/m^2 of average NET solar forcing where the LINEAR sensitivity metric is 390.08/240 = 1.625 W/m^2 of surface emissions per W/m^2 of forcing.
Since the Earth can’t possibly distinguish the next Joule of solar forcing from all the others, the next W/m^2 of forcing must also increase surface emissions by 1.62 W/m^2 which is less than the 2.17 W/m^2 required to satisfy the low end of the IPCC’s sensitivity range, thus falsifying the entire range. A second path to falsification is to show that the maximum possible linear sensitivity metric is 2 W/m^2 of surface emissions per W/m^2 of forcing which is also less than the 2.17 W/m^2 of surface emissions per W/m^2 of forcing required to support the low end of the IPCC’s ECS.
Expressing the sensitivity in terms of a temperature change per doubling of CO2 is just an additional level of indirection, beyond expressing the ECS as a non-linear temperature change per W/m^2, and that has no purpose other than to obfuscate the underlying linear sensitivity metric. The ONLY proper sensitivity metric is the undeniably linear metric of W/m^2 of surface emissions per W/m^2 of forcing. The IPCC can’t accept this as it undermines their reason to exist.
The LINEAR sensitivity metric for an ideal black body is 1 W/m^2 of emissions per W/m^2 of forcing. The Earth is not an ideal BB: rather than 100%, only about 62% of the average NET surface emissions eventually leave the planet, which can be expressed equivalently as a gray body with an emissivity of about 0.62. This leads to an EXACT quantification of the sensitivity factor of 0.3C per W/m^2, which, when multiplied by 3.7 W/m^2 of EQUIVALENT forcing, results in a sensitivity to doubling CO2 of about 1.1C, less than the 1.5C claimed as the lower limit.
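A short Python sketch that only reproduces the figures quoted above (sigma*T^4 at 288 K and 288.4 K, and the 390/240 ratio); whether this linear metric is the right way to define sensitivity is the commenter’s argument, not an established result.

```python
# Reproducing the figures quoted in the comment above; the premises are the
# commenter's own, this only checks the arithmetic.
SB = 5.67e-8                          # Stefan-Boltzmann constant, W/m^2 per K^4

emissions = lambda T: SB * T ** 4     # black-body emissions at temperature T, W/m^2

base = emissions(288.0)               # ~390.08 W/m^2
warmed = emissions(288.4)             # surface 0.4 K warmer (IPCC low end per W/m^2)
print(f"emissions at 288.0 K: {base:.2f} W/m^2")
print(f"emissions at 288.4 K: {warmed:.2f} W/m^2")
print(f"increase per W/m^2 of forcing at the IPCC low end: {warmed - base:.2f} W/m^2")
print(f"claimed linear metric (390.08 / 240): {base / 240.0:.3f} W/m^2 per W/m^2")
```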
They assume that a CO2-driven temperature increase will drive an increase in water vapor, a very potent IR-active gas. What they then neglect is an increase in cloud formation or any other driven factor that might counter the heating.
On the one hand, the alarmists claim that emissions of water vapor from combustion (including H2 combustion) are insignificant relative to the effects of the emitted CO2 from burning fossil fuels, yet on the other, they claim that the effect of water vapor is so large that a tiny increase in temperature resulting in a tiny increase in water vapor will amplify themselves into oblivion causing the world to fry. How does this nonsense get past peer review?
What they fail to understand is that the combined effect beyond the forcing from CO2, CH4, O3, water vapor, clouds and ice is only 620 mW per W/m^2 of forcing and that no amount of fabricated feedback or any other such nonsense will ramp this up to the 3.4 W/m^2 per W/m^2 of forcing required to support their insane ECS.
“That’s the worst of all possibilities. Because then mankind continues to steer into the unknown.”
As mankind has done forever. We are just along for the ride; we go where the planet takes us, and we cannot and never will be steering.
Sometime in the 50’s Hollywood got the idea that somehow a group of politicians, informed by a group of greybearded scientists, and spearheaded by a hunk military scientist (aided by a young pretty female scientist and quirky sidekick engineer/technician/scientist) could save the World from alien invasion, radioactive monsters, or just some plain old natural disaster. This meme has only grown. When I was a kid I sat on the rug and watched these reruns; I didn’t have any idea of the vastness and complexity of the Earth, but that meme, to some degree, was planted in my brain.
To paraphrase a quote attributed to LBJ: “Washington is like a turd covered with ants, floating down the Potomac — and every ant thinks it’s driving.” … No one is steering this climate turd and any claim to know what is really going on is just statistical hubris.
Climate models fail because there is zero warming from greenhouse gasses.
Earth’s temperatures are driven solely by changing levels of SO2 aerosols in the atmosphere, of both volcanic and anthropogenic origin, which affect the intensity of the sun’s radiation striking the Earth’s surface, plus natural recovery from the Little Ice Age cooling (~0.05 deg. C/decade).
Your single-controlling-knob view of world climate is as ridiculous as the climate being controlled by CO2.
Not =entirely= as ridiculous, SO2 aerosols at least have a demonstrated effect on global temperature.
Not “gasses”. CO2 only. They leave out water, the most important greenhouse gas.
I wonder what would happen if they created a model that left out CO2?
We know what happens when they create a model that leaves out the USA: 0.1-0.2 degrees C less warming by 2100. That’s what makes California’s climate change efforts so tragic; they represent a small fraction (reduction rather than elimination) of a small fraction (California, not the whole USA) of practically nothing, so it amounts to precisely bupkes (goat droppings).
“I wonder what would happen if they created a model that left out CO2?”
They have (Lacis et al 2010).
It did this ….
Unable to deal with water vapor realistically, they make the GROSS simplification that the effect is a function of temperature, and thus an amplification factor on CO2.
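The simplest version of that temperature-only assumption is constant relative humidity plus Clausius-Clapeyron scaling: roughly 6-7% more saturation vapor pressure per kelvin near ordinary surface temperatures. A minimal sketch using the standard Magnus approximation (a textbook formula, not any particular model’s parameterization):

```python
# Sketch of the temperature-only water-vapor assumption: constant relative
# humidity plus Clausius-Clapeyron scaling of saturation vapor pressure,
# here via the standard Magnus approximation (result in hPa).
import math

def saturation_vapor_pressure(t_celsius):
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

e0 = saturation_vapor_pressure(15.0)          # a typical surface temperature
for warming in (1.0, 2.0, 3.0):
    e1 = saturation_vapor_pressure(15.0 + warming)
    print(f"+{warming:.0f} K warming -> saturation vapor pressure up {100 * (e1 / e0 - 1):.1f} %")
```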
Climate sensitivity may be difficult to determine, but this isn’t — that a modeler will always promote models, whether they are worth anything or not. Because that’s where his salary is coming from….
Clouds are a part of the natural water cycle. They are powerfully intrinsic factors that affect land conditions and sea surface solar insolation. The effects of teensy tiny amounts of atmospheric anthropogenic CO2 are buried in the highly variable and wholly natural variations in clouds.
Case. Closed.
Any model will be useless when the premise upon which it is based is flawed, and when reliable validation data are non-existent.
The fallacy is in believing that humans can model an extremely complex planetary and astrophysical system of systems at all. We can’t. And probably never will. At least not accurately.
Also the notion of CO2 sensitivity is fallacious too – it assumes that it is just one thing – CO2 – that somehow controls the planetary thermostat. It completely fails to recognize that earthian climate is an extremely complex system of systems.
That kind of fallacious thinking is akin to thinking that the only thing that controls an automobile is the volumetric flow rate of gasoline to the engine … and that any two cars with equal gasoline flow rates will necessarily travel at the same speed down the road. Even that is a vast simplification compared to the massively complex earthian system of systems that results in what we call climate. Yet it is obvious on its face that the flow of gasoline does not solely determine the speed of the vehicle.
Elsewise a 1974 Chevrolet Caprice would necessarily be a faster vehicle than a 2019 Ferrari 812 because it consumes more gas per mile.
Yes, accuracy is the key. Anything can be modeled.
Models might even be based on a complete physical understanding, mathematical and otherwise and still be unable to predict the near term future because of some random nature of the system. Of course climate modeling is close to an infantile endeavor at this point.
Wouldn’t hurt if the models were based on valid assumptions.
It’s like building a bicycle with square wheels and then wondering why it’s so difficult to ride, blaming the rider’s ability, the terrain, air temperature, air pressure in the tires, and everything but the design itself.
Like this trike?
https://www.youtube.com/watch?v=LgbWu8zJubo
😄
Faster, more powerful supercomputers just let them make failing predictions faster. Until they understand WHY their predictions are failing they will never be able to improve them.
~¿~
And even then it likely won’t matter, as they’re modeling a non-linear chaotic system, and therefore highly likely to be quite sensitive to initial conditions… and when the available input dataset is rife with errors and imprecise, inadequately sampled datapoints, there’s no way we can realistically predict what the temps will be two to four decades from now.
I just wish PhDs had learned how to write English in grade school, particularly how to edit translations:
“For no other factor in the climatic events, this is as true as for the clouds.” An aimless sentence with no subject.
“This result has not changed until today, about 40 years later.” What has the climate sensitivity change to?
“The computational power of computers has risen many millions of dollars, but the prediction of global warming is as imprecise as ever.” A user of computers would know that the cost of computing power has decreased something like 10 orders of magnitude over the last century.
“Simulating natural processes in the computer is always particularly sensitive when small causes produce great effects.” The misunderstood problem is that computers cannot calculate very small but significant effects. The problem is machine epsilon, the smallest relative difference between two numbers that the machine can distinguish. Many ways to minimize the effects of machine epsilon have been discovered, but for climate models the bottom line is that many calculations fall below that level and can’t be reliably calculated.
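A minimal illustration of the machine-epsilon point in ordinary double precision (whether this is actually the binding limitation in climate models is the commenter’s claim, not demonstrated here):

```python
# Machine epsilon and loss of significance in standard double precision.
import sys

eps = sys.float_info.epsilon
print(f"double-precision machine epsilon: {eps:.2e}")   # about 2.22e-16

# A physically real but relatively tiny increment vanishes when added to a
# much larger accumulated total:
total = 1.0e20          # e.g. a large running energy sum
increment = 1.0
print((total + increment) - total)   # prints 0.0 -- the increment was lost
```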
This was stated to be a Google translation – in other words, a computer did it. Not PhDs.
It is Google. “Now we are sure: she is coming.”
Most likely, the German was sie, and should have been translated as “it”, not “she”. But idiom is hard.
Of course PhDs can emit bafflegab in any language. And German has a noted tendency toward that.
I recall the story of a scholar who came to the US as a refugee from the Nazis. He never again wrote or published in German. When asked why, he said that he had discovered that it is easy to write high flown nonsense in German, but that he found that very difficult to do in English.
Of course, the contemporary American academy has thousands of “scholars” who can write dreadful nonsense in English. But, it is easy to spot as nonsense.
Ignore the variability in the main driver (of almost everything), namely solar activity, and readjust the data to suit the large changes in a gas that is insignificant in influence, namely carbon dioxide.
Solar activity picked up a bit in March. The ‘classic’ sunspot count (Wolf SSN) is just under 7 points, while the new SIDC reconstructed number is at 9.5.
Composite graph is here
SC24 has entered what might be the start of a prolonged minimum (possibly a late start of SC25 too), but even a ‘dead cat bounce’ from these levels appears to be out of the question.
What about SC25?
If my calculations are any good (the last time they gave an ‘incredibly’ accurate result 🙂 , see the link above), we have to estimate the peak time of the next cycle. Assuming it occurs some time in 2025/26, the SC25 annual smoothed max will be in the low 50s in the old (Wolf) numbers, while Dr. Svalgaard predicts a much higher peak, possibly around 100, or in the new corrected numbers somewhere in the 140s.
The problem with computer simulations, old or new: GIGO….
As I said above, with the true believers of climate modeling, it’s “garbage in, _gospel_ out” ^_^
Apparently, they still know what snow is in Britain. https://www.telegraph.co.uk/news/2019/04/03/snow-causes-disruption-across-northern-britain-24-cars-crash/
shrnfr,
But, it appears, that they have forgotten how to drive in it.
Can Judith (or someone) confirm this: the global climate models that the AGW’ers use are basically the exact same computer models that our weather forecasters use, except they run for much longer on much bigger computers. And while the weather forecast for 24-48 hours is extremely accurate, once you get out to the weekly forecast, it’s no better than 30% accurate (approx). If this is the case – then how can these people claim any accuracy in their 100 year models?
I wouldn’t think so, daily and weekly forecasts do not take into account CO2 rise or fall over such short periods, while in the climate models it is the main ingredient.
“Now we are sure she is coming” in one paragraph and many other paragraphs about how unsure the models are. Sounds like someone I know who made a horse racing program to predict the winner of a given race…..
No, they aren’t the same. Most weather forecast models are based on expected similarities to previous weather patterns, similar to a police program that searches the fingerprint database looking for matches to today’s crime scene. Climate predictions are based on equations of physics, with fudge-factor multipliers (parameterization) to dampen out pesky variations that would cause either snowball Earth or hothouse Earth. One would expect the weather forecast to have a higher possibility of being correct because the likely extrema are already known from recent past averages.
The modelers will tell you that climate models are not like weather forecasting models. Climate models are modeling physical processes regardless of the initial conditions, while forecasting models are fundamentally dependent on initial conditions. Time is a vital component of weather models and an extremely important variable in the calculation of all the other variables for any given location. In climate models, time only impacts the CO2 variable, and is not a direct factor for the other variables. In other words, it is the change in CO2 that determines the other variables and the outcome, not the change in time.
While this is true, it is largely irrelevant. Climate models still use calculus to derive the other variables, and calculus is inherently flawed for use with non-linear, chaotic systems. As I said below… computer models are just not the right tool for climate prediction, although they might be better if we understood the science of climate change first, but we don’t!
So yes, the models are different. The climate models are worse than the weather models, which have proven their usefulness on a daily basis. The climate models have no observable skill, and are only useful as propaganda. Indeed, that is the only reason they are funded by politicians. The models must predict a climate crisis, or they will cease to be funded. They will have no use.
For the UK they have a unified model that does both climate and weather. Don’t know if that is what they are using for the IPCC.
When people talk about “forecasting the climate” they generally mean forecasting the average temperature. But there are other dimensions to the climate than just the average temperature. Why should anyone expect that you can forecast only one dimension without simultaneously forecasting all the other important dimensions?
Garbage in, garbage out. This is all that can and should be said about numerical “climate models.” Having more powerful supercomputers won’t help if the fundamental physics, chemistry, and biology in the models is junk. The parametrizations used are just ugly kludges[1], in plain language: cheating.
Here is what physics tells us: the in-depth analysis of vibration-rotation radiative transitions in atmospheric CO2 molecules yields “climate sensitivity” of 0.4K/doubling only at the present level of concentration[2]. As the concentration rises, the sensitivity drops, due to saturation. All the warming due to the anthropogenic CO2 injection into the atmosphere since 1880 has been a mere 0.02K–puny and practically undetectable against the natural centennial global temperature variability which is (0.98+/-0.27)K/century[3].
Amongst countless factors that the modellers ignore, often intentionally, but most commonly because they just don’t know enough themselves–for most, they’re second-rate physicists, chemists, biologists–is this interesting fact: boreal pine forests exude organic compounds into the atmosphere that help clouds form[4,5]. Which climate model takes this important driver into account? Which climate model takes Svensmark’s Effect[6] into account?
To make things worse, many of the kludges they use to mask their ignorance introduce unphysical effects into the solution, for example, the models end up violating the second principle of thermodynamics[7].
The simple truth about the numerical “climate models” is that they are no better than Mickey Mouse cartoons, and that their only function is to be used for warm-monger propaganda. They are tools of deception.
[1] https://doi.org/10.1175/BAMS-D-15-00135.1
[2] https://doi.org/10.1088/1361-6463/aabac6
[3] https://doi.org/10.1260/0958-305X.26.3.417
[4] https://doi.org/10.1038/nature13032
[5] https://doi.org/10.1038/NGEO1800
[6] https://doi.org/10.1038/s41467-017-02082-2
[7] https://doi.org/10.1002/qj.2404
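On the saturation point raised a few paragraphs up, the conventional simplified expression for CO2 forcing is the logarithmic fit of Myhre et al. (1998), dF = 5.35*ln(C/C0) W/m^2. The sketch below uses that standard formula (not the smaller 0.4 K/doubling figure argued in reference [2]) simply to show that each additional increment of CO2 buys less forcing than the one before:

```python
# The conventional logarithmic CO2 forcing fit (Myhre et al. 1998):
# each extra 100 ppm adds less forcing than the previous 100 ppm.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)   # W/m^2 relative to c0

previous = 0.0
for c in (280, 380, 480, 580, 680):
    f = co2_forcing(c)
    print(f"{c:4d} ppm: forcing {f:5.2f} W/m^2 (extra {f - previous:4.2f} W/m^2 for the last 100 ppm)")
    previous = f
```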
There is a wise expression about getting things done: “Any job is easy if you have the right tool!”
It is very clear that numerical prediction models are simply not the right tool to predict climate change. Using them to predict climate change is like trying to use a saw to drive in a nail. No amount of fiddling with the saw will turn it into a good hammer.
Since the advent of the computer, we have been mesmerized by the versatility of the device; marveling at all the many ways the computer has enriched our lives. It has certainly been a wonderful tool for short-range weather forecasting, leading to the belief that it should also be useful for climate forecasting. But the equations for short-term atmospheric weather phenomena existed before the computer came on the scene. In other words, the science was previously understood and was later used to build the computer model. There are no equations for climate forecasting. Those equations do not exist. So the models are not built on understood science, but on unsupported assumptions. Until the science is developed, and the equations that represent that science are created, the computer models of climate will be worthless. They are simply the wrong tool for the job.
For now, pattern recognition is a far better tool. (What has happened in the past will tend to (generally) happen again in the future.) Pattern recognition, however, does not support a pending climate crisis. On the contrary, pattern recognition indicates that there is not enough CO2 in the air to support a truly healthy Earth biosphere. Pattern recognition reveals that the burning of fossil fuels and the restoration of CO2 to the atmosphere is one of the greatest gifts modern man could possibly leave to future generations of all species!
While the reality of increasing CO2 is a phenomenon that deserves an annual holiday of global celebration, it is not something that can be used to frighten people out of their money and freedom, or manipulate and control the population. Therefore, the reality of the science, the futility of the climate models and history of the Earth are all being hidden from the general population.
Last century the lunatic fringe often predicted that the “end is nigh”!
Sensible people ignored them.
This century the lunatics have finally taken over the asylum.
They are now represented in mainstream science and some have been elected to high office.
“Nos morituri te salutant!”
“Now we are sure she is coming”
I was kinda wondering where the Metoo movement are on this. Like tropical cyclones nowadays don’t they take turn and turn about with the naming of these definite doomsday tipping points? Whatever happened to Zena and Zac I ask?
Google “IPCC non linear system”. One of the results will be this link :
IPCC – Intergovernmental Panel on Climate Change
http://www.ipcc.ch/ipccreports/tar/vol4/index.php?idp=106
“Given the complexity of the climate system and the inherent multi-decadal … The climate system is a coupled non-linear chaotic system, and therefore the …”
or this link :
The Scientific Basis – IPCC
https://www.ipcc.ch/ipccreports/tar/wg1/505.htm
In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction …
The well known end of the above sentences is : “… (long-term prediction) of future climate states is not possible.”
Click on the links above and the result is :
“PAGE NOT FOUND.”
Welcome to the IPCC, the Monty Pythonian and tragicomical farce of a total scientific failure.
+10… :<)
Do you think they are archived somewhere?
I have IPCC links that are not found either
Climate worriers (often with Dr. before their name) seem to have three postulates.
(1) Crisis carbon dioxide papers must be correct because they attract a lot of press and citations.
(2) The IPCC must be correct because they got a Nobel Prize.
(3) The necessary message cannot get out because of (a conspiratorial mass of) fossil fuel industry misinformation.
From https://www.pnas.org/content/115/33/8252 (Trajectories of the Earth System in the Anthropocene) “The Stabilized Earth trajectory requires deliberate management of humanity’s relationship with the rest of the Earth System if the world is to avoid crossing a planetary threshold…….Our initial analysis here needs to be underpinned by more in-depth, quantitative Earth System analysis and modeling studies to address three critical questions.” The first one kills the paper–“(i) Is humanity at risk for pushing the system across a planetary threshold and irreversibly down a Hothouse Earth pathway?” They already concluded that. “The impacts of a Hothouse Earth pathway on human societies would likely be massive, sometimes abrupt, and undoubtedly disruptive.”
I thought they had the answers. You don’t have to understand zilch about climate science to wonder how papers like this get published in supposed (pick your superlative) science journals. 16 authors makes it a committee, approved in a little over a month, so urgent, I suppose. As soon as I read the 88 citations, I will get back to you. So get scratched as a reviewer. Earth survival not that important.
Predictions based on NASA Newspeak are especially difficult.