Guest essay by Eric Worrall
The diverse predictions produced by 20 major research centres represent “strength in numbers”, according to UCL Professor of Earth System Science Mark Maslin.
Three reasons why climate change models are our best hope for understanding the future
Mark Maslin
Professor of Earth System Science, UCL
It’s a common argument among climate deniers: scientific models cannot predict the future, so why should we trust them to tell us how the climate will change?
…
Deniers often confuse the climate with weather when arguing that models are inherently inaccurate. Weather refers to the short-term conditions in the atmosphere at any given time. The climate, meanwhile, is the weather of a region averaged over several decades.
Weather predictions have got much more accurate over the last 40 years, but the chaotic nature of weather means they become unreliable beyond a week or so. Modelling climate change is much easier, however, as you are dealing with long-term averages. For example, we know the weather will be warmer in summer and colder in winter.
…
Here’s a helpful comparison. It is impossible to predict at what age any particular person will die, but we can say with a high degree of confidence what the average life expectancy of a person will be in a particular country. And we can say with 100% confidence that they will die. Just as we can say with absolute certainty that putting greenhouse gases in the atmosphere warms the planet.
…
Strength in numbers
There are a huge range of climate models, from those attempting to understand specific mechanisms such as the behaviour of clouds, to general circulation models (GCM) that are used to predict the future climate of our planet.
There are over 20 major international research centres where teams of some of the smartest people in the world have built and run these GCMs, which contain millions of lines of code representing the very latest understanding of the climate system. These models are continually tested against historic and palaeoclimate data (climate data from well before direct measurements, such as the last ice age), as well as against individual climate events such as large volcanic eruptions, to make sure they reconstruct the climate, which they do extremely well.
No single model should ever be considered complete as they represent a very complex global climate system. But having so many different models constructed and calibrated independently means that scientists can be confident when the models agree.
…
Errors about error
Given the climate is such a complicated system, you might reasonably ask how scientists address potential sources of error, especially when modelling the climate over hundreds of years.
…
We scientists are very aware that models are simplifications of a complex world. But by having so many different models, built by different groups of experts, we can be more certain of the results they produce. All the models show the same thing: put greenhouse gases into the atmosphere and the world warms up. We represent the potential errors by showing the range of warming produced by all the models for each scenario.
…
Read more: https://theconversation.com/three-reasons-why-climate-change-models-are-our-best-hope-for-understanding-the-future-175936
I have a few problems with these arguments:
- Comparing climate models to life expectancy models is, in my opinion, a false comparison.
Life expectancy models are constructed from millions of independent observations: medical records versus time of death. By contrast, climate scientists struggle to reconstruct what happened yesterday. There is a significant divergence between temperature reconstructions of the last 30 years, let alone between climate projections.
(source Wood for Trees)
- “Millions of lines of code” are not a source of confidence. Millions of lines of code are millions of opportunities to stuff up. As a software developer I’ve worked with physicists and mathematicians. They all think they know how to code, but with very few exceptions they wrote dreadful code.
The problem I saw over and over was that mathematics and physics training creates an irresistible inner compulsion to reduce everything to the simplest possible expression, even when such reduction means ditching software best practices designed to minimise the risk of serious errors. I knew what to expect well before I read Climategate’s “Harry Read Me“.
- If the climate models were fit for purpose, scientists would only need one unified model. The fact there are many diverse models is itself evidence climate scientists are struggling to get it right. Compare this plethora of climate models to, say, the models used to predict the motion of satellites. If satellite orbital predictions were as uncertain as climate projections, it would not be possible to create a Global Positioning System which can tell you where you are on the Earth’s surface to within a few feet.
- Climate models may contain major physics errors. Lord Monckton, Willie Soon, David Legates and William Briggs created a peer-reviewed “irreducibly simple climate model“, which appears to demonstrate that most mainstream climate scientists use a grossly defective climate feedback model.
… In official climatology, feedback not only accounts for up to 90% of total warming but also for up to 90% of the uncertainty in how much warming there will be. How settled is “settled science”, when after 40 years and trillions spent, the modelers still cannot constrain that vast interval? IPCC’s lower bound is 1.5 K Charney sensitivity; the CMIP5 models’ upper bound is 4.7 K. The usual suspects have no idea how much warming there is going to be.
My co-authors and I beg to differ. Feedback is not the big enchilada. Official climatology has – as far as we can discover – entirely neglected a central truth. That truth is that whatever feedback processes are present in the climate at any given moment must necessarily respond not merely to changes in the pre-existing temperature: they must respond to the entire reference temperature obtaining at that moment, specifically including the emission temperature that would be present even in the absence of any non-condensing greenhouse gases or of any feedbacks. …
Read more: https://wattsupwiththat.com/2019/06/08/feedback-is-not-the-big-enchilada/
Lord Monckton’s point is that, since feedback is a function of temperature, feedback processes can’t tell the difference between greenhouse warming and the initial starting temperature; all they see is the total temperature. You have to include the initial starting temperature alongside any greenhouse warming when calculating total feedback; you can’t just use the change in temperature caused by adding CO2 to the atmosphere. Making this correction dramatically reduces estimated climate sensitivity, slashes future projections of global warming, and removes the need to panic about anthropogenic CO2.
- Cloud error. As Dr. Roy Spencer explains in a 2007 paper which supports Richard Lindzen’s Iris Hypothesis, clouds are potentially a very significant player in future climate change. Yet as scientists sometimes admit, climate models do a terrible job of explaining cloud behaviour. If climate models can’t explain major processes which contribute to global surface temperature, they are not ready to be used as a serious guide to future surface temperature.
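Monckton’s feedback bookkeeping argument can be sketched numerically. The figures below (an emission temperature near 255 K, roughly 10 K of direct pre-feedback greenhouse warming, an observed surface temperature near 287 K, and ~1.1 K of direct warming per CO2 doubling) are round, assumed values chosen only for illustration; the sketch shows the arithmetic of the argument as described, not a verdict on it:

```python
# Illustrative sketch of the two feedback-gain calculations described above.
# All temperatures are round, assumed values, not sourced measurements.

T_emission = 255.0  # K: "emission temperature" with no greenhouse gases (assumed)
dT_ghg = 10.0       # K: direct, pre-feedback warming from natural GHGs (assumed)
T_observed = 287.0  # K: observed pre-industrial surface temperature (assumed)

# Conventional approach (as Monckton characterises it): infer the feedback
# gain from the *change* in temperature only.
gain_from_delta = (T_observed - T_emission) / dT_ghg  # (287 - 255) / 10 = 3.2

# Monckton's approach: the feedback responds to the *total* temperature,
# emission temperature included.
gain_from_total = T_observed / (T_emission + dT_ghg)  # 287 / 265, about 1.08

# Applied to an assumed ~1.1 K of direct warming per CO2 doubling:
dT_direct_2xCO2 = 1.1  # K (assumed)
print(gain_from_delta * dT_direct_2xCO2)  # high sensitivity, ~3.5 K
print(gain_from_total * dT_direct_2xCO2)  # low sensitivity, ~1.2 K
```

With identical inputs, the two bookkeeping choices yield sensitivities differing by roughly a factor of three, which is the heart of the dispute described above.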
Why are climate scientists so keen to have models accepted, and why do they seem so ready to gloss over the shortcomings? The following quote from a Climategate email provides an important hint as to what might have gone wrong:
… K Hutter added that politicians accused scientists of a high signal to noise ratio; scientists must make sure that they come up with stronger signals. The time-frame for science and politics is very different; politicians need instant information, but scientific results take a long time …
Source: Climategate Email 0700.txt
In my opinion, political paymasters demanded certainty, so certainty is what they got.
Science needs people like Mark Maslin, who are confident and willing to defend their positions and models.
I’m not suggesting Mark Maslin is in any way following the money or acting in a way which is contrary to his conscience. If there is one thing which comes through very clearly in the Climategate emails, it is that the climate scientists who wrote them are utterly sincere.
What in my opinion broke climate science is that the other side of this equation was all but eliminated. What I am suggesting is that climate scientists who were not confident in their models and their projections mostly got defunded, via a politically driven, brutal Darwinian selection process which weeded out almost everyone who wasn’t “certain”.
We can still see this happening today. Climate scientists who support politically approved narratives receive lavish funding, while those like Peter Ridd who question official narratives, not so much.
I’m not against climate models as such; I believe there is a chance, though not a certainty, that eventually we shall have a comprehensive model of climate change which can produce worthwhile projections of future climate. What I dispute is that most current climate models, which tend to run way too hot, are fit for purpose. In my opinion, climate models should be regarded as a work in progress, not an instrument which is useful for advising government policy.
Correction (EW): Fixed the title in the quoted article.
Correction (EW): h/t Climate believer – fixed a typo.
“Modelling climate change is much easier” than Weather … that’s the funniest thing I’ve heard all day.
This lie gets brought up every few years, by those who are incompetent, liars, or both.
It is a truth, not a lie. If you botch tomorrow’s weather prediction, everybody will know tomorrow. If you botch a year 2100 climate prediction, nobody will know in your lifetime.
And if you predict climate, once, perhaps twice, no one can know if it is science or fortune. Here’s to semi-stable conditions with a large variance.
Climate Science is exactly like Astrology: you don’t have to completely understand anything, you just make up general statements that the target audience can perceive as true. Ask Griff: 3% wetter in the UK somehow means full-world global warming, yet if you pushed him on what the current blizzards in the UK mean, it would still be full-world global warming. Science and common sense do not come into it, only belief.
What UK blizzards? Up until yesterday we’ve had an incredibly quiet and uneventful January here. We’ve basically had no weather.
But in the next sentence he goes on to say that climate is the average of weather. In other words, he admits that modelling weather is all but impossible more than a couple of weeks out, but that somehow if you average weather decades out, that is more reliable.
Averaging, averaging, averaging, averaging, … averaging -> infinity.
Don’t you know this will always give the correct answer!
If all 20 centers of “expert climate science” modeling rely upon the same false assumptions, then there is no safety in numbers of GIGO computer games.
“Modelling climate change is much easier” than Weather
…
Either he does not understand or know anything about modelling OR he does not understand or know anything about climate OR he does not understand or know anything about weather OR any combination of these three assertions.
Also, Mark Maslin, Professor of Earth System Science, says weather systems are chaotic and difficult to predict, however climate is easier because it involves averages over decades, then cites the difference in WEATHER between summer and winter, a one year time period, not decades, as evidence.
Using “averages” is an excuse to ignore the chaos. No model can be trusted to predict climatic changes, because none can replicate past changes.
Read my comment below. I hope it is at least as funny.
I am not comprehending why this got so many up votes.
Please note I’m not arguing the assertion is true, just that I think it’s a tougher question than some suggest.
The two disciplines, while sharing some obvious commonality, are more different than some realize. Climate models have to deal with time frames measured in decades, while weather models have time granularity measured in days and weeks. That doesn’t make either inherently harder or easier than the other. A weather model that predicts a foot of snow on Sunday is a complete failure if a foot of snow comes on Wednesday, but a climate model could be off by years and deemed to have “nailed it”.
I’d be curious to know how many people are employed full-time in the creation of weather models compared to climate models.
The aggregation of Climate Models seems to be less useful and less accurate than the Old Farmer’s Almanac. Weather forecasts are at their best up to 3 days, but Climate Models are always wrong as they are produced as advertising vehicles for ‘Global Warming’.
At least the Farmer’s Almanac is based on experiential data rather than imagined, simplified, childish models based on mythical “science.”
“But having so many different models constructed and calibrated independently means that scientists can be confident when the models agree.” They have been calibrated? Against what? What was the start point?
Not only are they not calibrated (in the true sense of the word) they don’t agree either. Thus, there is no basis for the confidence claim.
Near as i can figure, they are saying different random processes coming to the same conclusion means something
How can different processes modeling different aspects of the question come to the same answer? It’s ridiculous.
Note the graphs (not shown): 1) the “spaghetti” graphs of averaged global temperature anomalies; and 2) those of the actual temperatures. The graphs in 2, with their divergence of 3C between the models, show that CliSciFi models are not using the same physics calculations. Model divergence in the “spaghetti” graphs in 1 does not represent statistical variation of a common physical parameter; they are all different speculations as to the linear relationship of global temperature to atmospheric CO2 concentrations.
The Russian model appears to be the most realistic, but no one (here) wants anyone to know about it.
It’s been mentioned here- great interview of Pat Michaels on Fox where he mentions the Russian model. https://www.foxnews.com/transcript/dr-patrick-michaels-on-the-truth-about-global-warming
after watching that, I proudly became a “denier” :-}
I have a model that predicts global warming it’s called a coin and we toss heads/tails.
So I tossed the coin and I got heads so global warming is real .. see models are accurate 🙂
“The climate models all agree… the observed data is wrong.” Not mine, but still funny. I think it was a NASA scientist. I want to say Roy Spencer but not sure.
Certainly not Roy.
Lorenz’ criterion for validity is that the model be constructed using physics and starting conditions. If the output then accurately matches historical data, it might be valid. Once you start to tune the model (which is standard with climate models), it isn’t valid at all.
There is a wonderful article, Beware of von Neumann’s Elephants, written by a modeler for other modelers.
Riiight, but somehow climate modelers can model much more complex processes.
There are people who deeply care if their models correspond to reality, and then there are climate modelers, pretty much out in left field all by themselves.
Why do we need so many models 😉
Lots of climate scientists need to get their snouts in the money trough.
And non-climate scientists, like the large majority of the supposed 97% consensus.
So 2 wrong models are somehow correct because they come up with the same wrong answer differently. No logic to that argument at all.
As far as I know, the only computer climate model that is fairly accurate is the Russian model. Multiple bad models do not act like multiple measurements of the same thing, but act like a perverse game of Battleship, where one is determining where the target is not.
So far, none of the climate models have managed to sink the Battleship.
The Russians think the Western climate panic is a big joke. In 2014 during the G20 in Brisbane, Australia, a flotilla of nuclear warships was parked just outside Australian territorial waters. When asked to explain, the Russian embassy replied they were doing global warming research, though they had a secondary purpose providing protection for President Putin.
Two of the four Russian ships were possibly armed with nuclear weapons, but none was nuclear-powered.
https://en.m.wikipedia.org/wiki/Russian_cruiser_Varyag_(1983)
https://en.m.wikipedia.org/wiki/Russian_destroyer_Marshal_Shaposhnikov
Russia is however planning to build a class of nuclear-powered cruisers to follow on the Slava class.
The Russian navy’s only currently operational nuclear-powered surface combatant is the battlecruiser Pyotr Velikiy (Peter the Great), flagship of the Northern Fleet. One of her sister ships is being refitted. The other two are being scrapped.
The three Slava-class cruisers serve as flagships of the Baltic, Black Sea and Pacific Fleets, roles for which the four nuclear battlecruisers were intended.
https://en.m.wikipedia.org/wiki/Kirov-class_battlecruiser
So Russia has more nuclear powered icebreakers than battleships.
They know the real enemy….
Good point, but I don’t know if nuclear powered ice breaker Lenin is still in service or not.
“The Russians think the Western climate panic is a big joke”
Even if they think it’ll warm – they can’t wait given how frigid most of their nation is. And, they can’t wait for the arctic to thaw giving them the ports they want.
The Chinese ones are not too bad either.
It’s going to be impossible for COPpers to convince large CO2 emitters that they need to ignore their own models, which are reasonably decent, and make policy based on the hysterical models which most of the world seems enamoured with.
Is it not a red flag for you that the models from the most secretive and dishonest major government are the ones that differ from all the rest?
Honestly, which models tell the truth? Let us into the secret.
Which countries are you talking about? It used to be that the western democracies had at least some level of honesty and openness, but I fear that time has passed.
Or maybe you just want to believe what Putin has to say. Wait, now who was the US president who said we should believe Putin?
Is it not a red flag for you that the model from the most secretive and dishonest major government is the only one that comes even close to matching the real-world data?
As to government being secretive and dishonest, I thought you liked that?
“Is it not a red flag for you that the model from the most secretive and dishonest major government is the only one that comes even close to matching the real-world data?”
But that’s not true, is it Mark…
What make you think Putin has any influence on Russian climate models?
How can it be that after half a century of building models, they still cannot get an outcome that matches the temperature actually collected over that half century? Surely the FIRST test of a model would be to accurately predict each and every year from 1970 until now, based on data prior to the predicted date. It would seem, from the chart, that ALL the IPCC models put forward are hopeless at even predicting temps that can be checked with actual data… for OVER 50 YEARS !!! Any model worth its salt should be able to “predict” 1970-2022 temps based on prior data. Surely the modelers MUST be asking themselves: why is my model running so much hotter than what we have actually observed for the last 50+ years? Where are we going wrong?
Lord Monckton’s paper which I quoted provides a possible explanation. There is a gap between the surface temperature we should have and the measured surface temperature. Lord Monckton and the other authors contend that most other climate scientists made a fundamental error when calculating how to fill that gap, an error which consistently causes their models to run hot.
I have 2 words for the model output utility–Pat Frank. This modeler either has not reviewed Pat’s work or doesn’t accept or understand it. The initial error conditions are propagated into meaningless output in just a few iterations of the model calculations.
The climate scientists all assume that errors in initial conditions or in the differential equations all cause a random walk from iteration to iteration so all the errors cancel. Just like they assume that all measurement uncertainty is totally random with no systematic component so all the random errors cancel out. Therefore all their outputs have no error, they are 100% accurate.
It’s the only way they can justify their predictions being usable to the hundredths digit over a 100-year timeframe.
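The distinction this comment draws, between random errors that partially cancel and a systematic error that compounds, can be seen in a few lines of simulation. The step count, error size and bias below are arbitrary assumed values chosen only to make the contrast visible:

```python
# Sketch: random per-iteration errors grow roughly like sqrt(n),
# while a systematic per-iteration bias grows linearly with n.
import random

random.seed(42)  # fixed seed so the run is repeatable

n_steps = 100    # number of model iterations (assumed)
sigma = 0.1      # standard deviation of the random error per step (assumed)
bias = 0.05      # systematic error per step (assumed)

accumulated_random = sum(random.gauss(0.0, sigma) for _ in range(n_steps))
accumulated_bias = bias * n_steps

# The random component typically ends up near sigma * sqrt(n), about 1 here,
# while the systematic component is 0.05 * 100 = 5 and keeps growing with n.
print(abs(accumulated_random))
print(accumulated_bias)
```

Re-running or averaging does nothing to the second number; only identifying and removing the bias itself does.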
Eric,
Professor Garth Paltridge addresses this point in “The Climate Caper: Facts and Fallacies of Global Warming”:
…The point to all this (climate models), known at least subconsciously by almost everyone in the climate business, is that the fundamental problem of climate physics remains that of defining the minimum space and time scales over which it may be possible to predict changes in climate when somebody clouts the Earth over the head with a sort of climatic bat.
We mentioned at the beginning of the chapter that there is a fair amount of reasonable science behind the global warming debate.
True enough, but ‘reasonable’ is a relative term, and it has to be said that the typical climate model of today has great difficulty calculating even the present day global average temperature.
The twenty or so (now some 86) models that have some respectability (by virtue of the fact that they figure largely in the IPCC deliberations) calculate global average temperatures that range several degrees about the observed value of 15C.
Their simulations of the broad DISTRIBUTION of present day temperature are only so-so.
And as might be expected from the earlier discussion, their simulations of the broad distribution of other parameters such as rainfall can only be described as terrible. …
The problem is that it seems almost impossible to imagine ways in which broad overall laws can be applied to the present generation of climate models that are designed specifically to do their calculations on the small, so-called ‘local’ scale.
The bottom line may be a cultural one.
The climatology profession is reluctant to give up on the hope that detailed forecasts are possible in principle.
The prospect of having to put up with only the broadest averages is too difficult to countenance.”
So they officially lie to support a narrative that dooms much of the world to poverty, by transferring great wealth and power to a chosen few. Does that sound like how science is supposed to benefit Mankind?
That their models run hot has (belatedly) been acknowledged by their “Pope,” Gavin Schmidt of NASA.
Not to mention that no models anywhere have ever predicted 18 year pauses in global average temperature warming.
Even the CliSciFi modelers admitted a pause of 15 to 17 years would falsify their models.
There is certainly an argument to be made that the technology is driving the science. In this review they found that there has been an order of magnitude increase in the use of models in aquatic ecology in the last two decades. While many of these parameters are easier to measure than clouds, they deal with more than physics and chemistry. Open access; one quote: “Training of the next generation of researchers should include cross-fertilization and development of skills in both observational and modeling techniques.”
Ganju, N.K., and 13 other authors. 2016. Progress and challenges in coupled hydrodynamic– ecological estuarine modeling. Estuaries and Coasts 39(2):311–332.
https://doi.org/10.1007/s12237-015-0011-y
In my world, models are a wonderful design tool. If everything goes correctly, you design the circuit using model-based software. You build the circuit. You don’t have to tweak it even a little bit. That’s because the model has been thoroughly tested against reality. The whole process of making sure models work right is called verification and validation.
There’s no way climate models can be properly verified and validated. ‘Those people’ make up bs to pretend they can.
Anyone who pretends that climate models are valid is demonstrating that everything is easy if you don’t know what you’re talking about.
Do not forget that they also use climate data that has been dishonestly altered to show warming, with the past being cooled and the recent warmed. Their base data is corrupt as well as their simplified and ingenuous assumptions they call science.
I’ve never created a circuit model that accounted for all the stray capacitance and inductance in the actual physical circuit. That may not be a big deal at 3-5 MHz but it gets to be a bigger deal at 100 MHz. Kind of like a climate model predicting out 3-5 years vs 80-100 years!
My students used Microwave Office with great success. The link is for a helical resonator but my students would typically design impedance matched 1 GHz pcb amplifiers. The software would account for all the metal, ie. devices, pcb traces, case, and lid. Given the varying parameters of individual transistors, a bit of tuning might produce better results, but the designs usually worked well when they were built.
Actually, modeling HF circuits is child’s play compared with what it would take to create a valid model of Earth’s climate.
We didn’t have tools like this when I was at university. In fact we didn’t have tools like that when I worked for the telephone company. Most of the circuit design was done by hand and slide rule/calculator with estimates of strays. Then it was cut to fit.
The climate models are essentially y = mx + b equations. That’s no way to model the Earth’s climate.
CFD, and even worse MHD, models are notoriously difficult. Any firm (Auto…) that tried to avoid real test runs paid a high price. Result: plenty of HPC, and plenty of wind tunnels. MHD as in Wendelstein 7-X must be tested with real runs.
Taking the average of a bunch of bad data points gives you a good answer? Models are totally different, but they all give the same answer? So many models because there is so much money. We validated our models with empirical data thru testing and adjusted them as necessary. Note: the models not the data.
Yes, because you are dealing with averages, you can ignore some of the basic chaotic nature of weather.
However you add many new complexities that weather forecasts can ignore.
I’ll bring up the 5 spheres again.
atmosphere
hydrosphere
cryosphere
lithosphere
biosphere
All of the spheres interact with each other, and few of these interactions are well understood.
Until you can accurately model all of these interactions, predicting long term climate is simply impossible.
For example, let’s say global warming actually does increase rainfall.
Increased rainfall will impact the biosphere. Some plants will die off, new plants will thrive.
Changing flora will change the types of animals that thrive, first herbivores, then their predators. Changes in fauna will in turn impact the flora again.
Increased rainfall will also impact the lithosphere. More rainfall will increase erosion, both soil being washed away and rocks themselves being eroded. Increased erosion will change the chemical makeup of streams, lakes and the ocean itself. Changing chemistry will impact both the types of plants and animals that live in water.
Any change in one sphere will impact the other spheres. Each of these changes will in turn impact the spheres, and so on to infinity.
Every day we are reading papers about new and previously unknown linkages between plants, animals, water and earth.
The authors claim that modeling climate is easier than modeling weather. Nothing could be further from the truth. There is probably more chaos in climate than there is in weather.
How many models take Milankovitch cycles into account?
None, because over the time periods being discussed they don’t change enough to matter.
Freeman Dyson pointed this out years ago. It was his biggest complaint with climate models focusing solely on CO2. He said any study of climate needed to be a holistic one, considering *all* facets of the biosphere. Anything less risks changes in one part of the biosphere wreaking havoc in other parts of the biosphere.
What your analysis shows is that climate equilibrium is a myth.
“Modeling climate is easier than modeling weather.”
I agree.
The key reason is there are no penalties for being wrong on climate.
But be wrong on weather modeling (10 days out) and you get quietly ridiculed by peers and your ass is handed to you by better weather models, like the Euro models beating NOAA’s almost all the time.
In fact, the incentives are for being wrong on the hot side on climate.
And as Lord Monckton of Brenchley has pointed out frequently here on WUWT posts, simple linear models of CO2 do as well as multi-million dollar super computer model outputs.
So climate models could effectively be y = mx + b on graph paper, something an 8th grader could do with the same accuracy as NOAA’s teams of PhDs and supercomputers spending tens of millions of dollars every year.
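For concreteness, here is the kind of pocket-calculator model being referred to: the widely used logarithmic CO2 forcing approximation (about 5.35·ln(C/C0) W/m², from Myhre et al. 1998) multiplied by a sensitivity parameter. The sensitivity value and the concentrations below are assumptions chosen for illustration, and the function name is mine:

```python
# A "graph paper" climate model: warming as a linear function of the
# logarithm of CO2 concentration. The 5.35 W/m^2 coefficient is the
# common Myhre et al. (1998) approximation; the sensitivity value is
# an assumed, illustrative tuning parameter, not a sourced figure.
import math

def simple_warming(co2_ppm, co2_ref_ppm=280.0, sensitivity_k_per_wm2=0.5):
    """Estimated warming (K) relative to the reference CO2 level."""
    forcing = 5.35 * math.log(co2_ppm / co2_ref_ppm)  # radiative forcing, W/m^2
    return sensitivity_k_per_wm2 * forcing

# Roughly 420 ppm today against a 280 ppm pre-industrial baseline:
print(round(simple_warming(420.0), 2))
```

The entire "model" is one multiplication and one logarithm; everything interesting is hidden in the choice of the sensitivity parameter, which is exactly the quantity the GCMs disagree about.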
Joel,
Your list of problems with GCMs is getting pretty lengthy. Allow me to summarize some of the others for the benefit of WUWT readers:
From “The Problem with Climate Models” –
“Convergence. It is really a lack of convergence, better seen as a divergence. The more models produced by the Climate Dowsing community, the wider the ECS upper and lower bound estimate gets, rather than a convergence as n increases.”
“(T)hey are iterative input value error propagation machines that produce statistical error that quickly overwhelms any conclusions in the output that can be drawn. This is the Pat Frank-developed problem with the GCMs.”
“Another major problem with too hot running climate models, at least in their current implementations, is they all predict the mid-tropospheric tropical hot spot. No one in the climate modeling community wants to talk about this elephant in the room anymore.”
They’ve recently put the “pea” under the “cloud thimble,” thinking it will take years to catch on to the similar scam of using the “aerosol thimble” as they did for decades. They will all be rich or comfortably retired by then.
“Modelling climate change is much easier” than Weather.
Please, any pro AGW commenter, provide one pro AGW model that correctly predicted the actual climate between 2010 and 2022.
Just one will do.
Because in all fairness, I’m not aware of ONE model that correctly predicted what happened between 2010 and 2022.
If the models are all wrong… then the science is all wrong… period.
End of Story
Some of them at least think comparing models to reality is a little pointless.
“People underestimate the power of models. Observational evidence is not very useful,” – John Mitchell, UK MET
“Observational evidence is not very useful,”
What a quote!
Absolute admission that the Scientific Method of observation & measurement, also known as Empiricism, doesn’t have much of a place in Mr. Mitchell’s thinking.
No scientist, he…
Anybody stretching the truth (CliSciFi) to service Leftist governments, NGOs and crony capitalists is “no scientist.”
Does he think models provide evidence?
Just look at how many of them refer to the output of models as “data”.
They cannot do that because there was no climate between 2010 and 2022. Just weather. Did they predict the weather correctly? Sadly no. But it’s really quite easy, just take the extremes and average them out.
Marxist propaganda 101. Accuse the opponent of what you’re doing. Once again it’s not about climate, it’s about ideology and supporting the narrative.
Like two weeks to flatten the curve 🤓
My impression is that all climate models begin with the assumption that warming will occur. Warming is baked into the programming. With the global warming “pauses” we’ve seen over the past couple of decades, this assumption is erroneous. I think the modelers aren’t trying to model the real climate, they’re trying to model what they expect.
That’s why one reads of CliSciFi modelers openly adjusting parameters during the models’ tuning periods to “get an ECS that seems right.” Then people like Gavin Schmidt lie when they say that ECS is simply an emergent phenomenon of the models, not programmed in as skeptics accuse.
The climate models should be sorted out and getting things mostly right about the same time as Jimmy Hoffa’s body turns up.
Be fair, there’s a real chance one day they will find Jimmy Hoffa’s body… 🙂
There’s a real chance I could win a $500 million Powerball lottery. Better odds than finding Hoffa, I’d say.
😉
Only if you believe that there is a GHE having some impact on Earth’s energy balance.
The energy balance on Earth is controlled by two temperature-regulating processes that set the upper limit of the ocean surface at 30C and the lower limit at -1.8C. Both are due to ice formation: ice forming on the water surface, and ice forming in the atmosphere over the tropical oceans.
Whatever you think the GHE is, it is not doing anything to Earth’s energy balance.
One trick pony crank.
“ by having so many different models, built by different groups of experts, we can be more certain of the results they produce.”
And if all the models are biased in one direction, their average will be biased in that direction. Duh! Garbage is garbage.
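The statistical point here is easy to demonstrate: averaging many estimates shrinks their independent random noise, but a bias shared by every estimate survives the average untouched. A toy sketch (all numbers and names are invented for illustration, nothing here comes from an actual climate model):

```python
import random

random.seed(0)
TRUE_VALUE = 1.0   # hypothetical "true" quantity, arbitrary units
SHARED_BIAS = 0.5  # bias assumed common to every model

# 20 "models": each adds independent noise plus the same shared bias
models = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, 0.1) for _ in range(20)]
ensemble_mean = sum(models) / len(models)

# The noise mostly cancels; the shared bias does not
print(round(ensemble_mean - TRUE_VALUE, 2))  # close to 0.5, not 0
```

Whether real model ensembles actually share a common bias is of course the point under dispute; the sketch only shows that agreement among models is not by itself evidence of accuracy.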
CH: Agreed. This one caught my eye.
“All the models show the same thing: put greenhouses gases into the atmosphere and the world warms up.”
Well, duh. When all the models are built around the premise that additional GHGs cause warming, it would be startling if any model didn’t show warming. Such a model would never see the light of day. The idea was to try to predict how much warming by when, and they’ve failed at that for four decades. He also thinks that predicting that summers will be warm and winters cold in the future somehow supports the validity of climate models. What nonsense.
Christopher Monckton (a hereditary Lord, a position abolished by Blair’s Labour government in the UK) was of course banned from addressing IPCC meetings and conferences after he audited climate hoax computer modelling figures and exposed the errors and omissions. He’s not a scientist, they said; well, he never claimed he was, he is a mathematician.
The science is settled, they claimed, but science is never settled, and auditing rubbery figures is not science.
Gavin Schmidt is also a mathematician yet that does not prevent him from calling himself a climate scientist, NASA’s chief climate scientist.
The “he’s not a scientist” argument only applies to people on the wrong team.
Modeling is easy. Getting it right is not.
If they got it right all the time it wouldn’t be a model 🙂
UCL Professor is utterly clueless. In Maslin’s derangement, he might as well use Ouija boards as ‘easy’ programs to model Earth’s chaotic climate systems decades into the future.
Which goes a long way towards explaining why no climate model has accurately forecast the climate.
I programmed models, financial analysis, workhour/productivity analysis and official reporting systems for a couple of decades. Except for one of the reporting systems, all were turned over to operations after the data engineer was satisfied with the test deck, outputs and operation.
I never saw any program that was well designed, coded and implemented by any scientist of any stripe unless they were trained by expert programmers at a serious college/educational facility.
I cringe when professors, mathematicians, scientists, physicists, etc. talk about programming. They are always the worst offenders when us peons want designs, project status, code tests, verified output, thorough test decks, security verification, operating parameters, database designs, compliance with official data governance, privacy and security policies, etc.
Elites, like professors, tend to take common sense questions along these lines as insulting and intrusive.
Nor have I seen many well designed, coded, implemented programs by so-called programmers. Their spaghetti-code programs were frequently as bad as those of the self-taught scientists.
The first problem for many pseudo-programmers is that they know only one language, and know it poorly, yet they force that language to serve every purpose.
The second problem is that they start coding and continue until they’ve stuffed every known need into that same linear program. When problems occur, they don’t fix the design or the coding; instead they add a fix.
Often a fix on a fix on a fix, etc., ad nauseam.
The majority of these errant programs are written without a design, a proper project plan, a complete set of test data, calculations and outputs, etc.
All of their programming is conducted ad hoc. When a program actually runs, it is a surprise rather than expected.
The most typical result is a big plate of spaghetti.
I taught myself how to program in “C” so I could do the calculations necessary in my forest management work. I bought several of those 1,000-page books on the language, read everything in all of them, and entered all the code on my PC. I studied all the methodologies as you noted and tested every line of code, over and over. Spent years working on this. I really love the C language. If I hadn’t gone “hippy” in the ’60s and majored in forestry, I might have become a mathematician or computer scientist.

I loved creating data structures and using pointers, which took me a long time to grasp. I found that doing this programming got me high, because it was soooo difficult I had to focus more intensely than at any other time in my life. I got in the habit of commenting the code, line by line, putting the comments in the code so I could understand it later. It was a wonderful experience. Years later I found that I could recreate the program(s) in Excel, and that took almost no time at all. But working in C was a terrific mind exercise. I wanted to move on to C++ and object-oriented programming, but then I moved on to other interests. I do have great respect for the best programmers. They’re just as smart as anyone in the truly hard sciences of physics and chemistry, IMHO.
After all that, I have little confidence in climate models- unless I see the code and data- simple as that.
There is, and has only ever been, ONE book on ‘C’ 🙂 Well, 2 if you include the 2nd Edition.
All those 1,000 page books are either general programming books using ‘C’ as the language of choice, software engineering books using ‘C’, or very extensive study guides.
‘C’ is a wonderful language, extremely powerful but extremely unforgiving.
As you discovered, the higher-level scripting languages are extremely useful, and, in practice, are more widely used. The in-built data structures in something like Perl or, more recently, Python can allow very productive coding. The higher level of abstraction does insulate one from what is really happening, so the time spent with C or Pascal is time well spent.
Yes, there is far more to software development and software engineering than just cutting code.
Mathematicians and physicists appear to treat programming languages as just a different form of mathematical notation, and trust the outputs.
Without the underlying theoretical foundations provided by a Computer Science degree, it is very easy to fall into the traps of rounding errors, lack of input or output validation, bounds checking, hard-coded inputs, or my pet peeve of complete lack of source control.
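The rounding-error trap mentioned above is easy to stumble into in any language. A minimal illustration in Python (the same effect occurs in C doubles, since both use IEEE 754 binary64): naively summing values that have no exact binary representation silently accumulates error, while a compensated summation such as `math.fsum` does not.

```python
import math

values = [0.1] * 10  # 0.1 has no exact binary floating-point representation

naive = sum(values)        # error accumulates term by term
exact = math.fsum(values)  # compensated (Shewchuk) summation

print(naive == 1.0)  # False
print(exact == 1.0)  # True
```

A scientist trusting `naive == 1.0` as a loop-termination or validation test would hit exactly the kind of silent failure the comment describes.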
Then, of course, we get into the next higher level of project management and quality control you so nicely summarised.
And that’s without getting into system administration, system engineering or deployment management which are so critical operationally.
Sent to Mark Maslin under the subject heading, “saving humanity.”
Here you go, Mark,
“Propagation of Error and the Reliability of Global Air Temperature Projections”
https://www.frontiersin.org/articles/10.3389/feart.2019.00223/full
Climate models have no predictive value.
Regarding CO2 emissions and the climate, the IPCC don’t know what they’re talking about, and neither do you.
Your program is one of universal immiseration and early death.
Patrick Frank, Ph.D.
+++++++++++++++++++++++++++++++
These things are, we conjecture, like the truth;
But as for certain truth, no one has known it.
Xenophanes, 570-500 BCE
+++++++++++++++++++++++++++++++
Not that it’ll do any good.
Good luck Pat. Such emails are always worth sending.
If I was a thug- I’d force Mark Maslin to read all the comments here. :-}
Remind Maslin that Xenophanes is not Xenophon, whose Anabasis (the Persian Expedition) and Cyropaedia (Education of Cyrus) reportedly served as guides for Alexander’s conquest of the Persian Empire.
These Gaia-like imperial narratives, then Persian Mithra, have yet again got a grip – and the modern ‘climate’ empire is flailing.
Hope-ium springs eternal.
Oh, so it’s _deniers_ that mistake weather for climate? And here I was thinking that it was our awful MSM who keep attributing every weather event to climate change.
“These models are continually tested against historic and palaeoclimate data (this refers to climate data from well before direct measurements, like the last ice age), as well as individual climate events such as large volcanic eruptions to make sure they reconstruct the climate, which they do extremely well.”
Really? As best I know, none of these models has been used to “backcast” earlier than 1900, and even then they don’t do so well; certainly I’ve never seen any model output that reproduces the Little Ice Age, Medieval Warm Period, etc. If someone knows differently I would love to hear about it.
That’s “test” according to climatology. More or less it means “tune the model until it looks about right.”
I’ve seen the results of these back casts. And describing them as matching the data really well is unjustifiably generous.
If you get to pick your metric, location and time period, you can say anything you want about CliSciFi model accuracy.
They cannot hindcast beyond instrument records because the models cannot account for the massive swings in temperatures in the reconstructions. That is, the models are predicated on a magical stasis that is perturbed by feedback from increased atmospheric CO2.
Since CliSciFi model outputs show a direct linear relationship to CO2 concentrations, great mathematical gymnastics must be performed to even get close to the direction of multidecadal temperature variations of the past. Lysenkoism.
You don’t really have to worry about back-casting against palaeoclimate data. We know that in the recent past we had the MWP and the LIA, significant natural variations even in the average temperature.
Go out past 5 years on *any* of the climate models and all the natural variation disappears. If climate forecasting is “easy” because it uses averages then at least some natural variation in the average temp should be seen, but it isn’t. The projections just become y = mx + b. If the climate models are actually accurate out 100 years from the present then they should also be accurate for the next 200 years, 500 years, or even the next 1000 years. Ever growing temps till the Earth becomes Arrakis with nothing living on it except spice worms.
Happer has shown that there should be some kind of logarithmic response from CO2 increases. Have you seen a climate model prediction that even pretends to show a logarithmic curve for future temps? I haven’t. In their arrogance the AGW proponents can’t even accept that their y = mx + b projections might need some further “tuning”!
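For reference, the logarithmic dependence the comment refers to is usually written with the simplified forcing expression ΔF = 5.35 ln(C/C₀) W/m² (Myhre et al. 1998). A minimal sketch of why that implies a flattening curve rather than y = mx + b, each doubling of concentration adds the same increment:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing (Myhre et al. 1998),
    in W/m^2, relative to a pre-industrial baseline c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, so equal ppm increases
# matter less and less as concentration rises -- a curve, not a line.
print(round(co2_forcing(560) - co2_forcing(280), 2))    # 3.71
print(round(co2_forcing(1120) - co2_forcing(560), 2))   # 3.71 (same again)
```

How that forcing translates into temperature (the feedbacks and the ECS disputed throughout this thread) is a separate question; the sketch only shows the logarithmic shape of the forcing term itself.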
Dr. Happer has shown that doubling current CO2 levels will have no measurable effect, and that doubling could take 100 years. The optimum for plants is probably 4 or more times current levels.
Of course it’s easier. They just fabricate numbers and project a date past their retirement age.
“Just as we can say with absolute certainty that putting greenhouses gases in the atmosphere warms the planet”
This is ridiculous because you cannot demonstrate that removing these gases cools the planet.