From the COLUMBIA UNIVERSITY SCHOOL OF ENGINEERING AND APPLIED SCIENCE and the “learn garbage in, get garbage out” department.
Machine learning may be a game-changer for climate prediction
New York, NY–June 19, 2018–A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. This challenge is behind the wide spread in climate prediction. Yet accurate predictions of global warming in response to increased greenhouse gas concentrations are essential for policy-makers (e.g. the Paris climate agreement).
In a paper recently published online in Geophysical Research Letters (May 23), researchers led by Pierre Gentine, associate professor of earth and environmental engineering at Columbia Engineering, demonstrate that machine learning techniques can be used to tackle this issue and better represent clouds in coarse resolution (~100km) climate models, with the potential to narrow the range of prediction.
“This could be a real game-changer for climate prediction,” says Gentine, lead author of the paper, and a member of the Earth Institute and the Data Science Institute. “We have large uncertainties in our prediction of the response of the Earth’s climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases. Our study shows that machine-learning techniques help us better represent clouds and thus better predict global and regional climate’s response to rising greenhouse gas concentrations.”
The researchers used an idealized setup (an aquaplanet, or a planet with continents) as a proof of concept for their novel approach to convective parameterization based on machine learning. They trained a deep neural network to learn from a simulation that explicitly represents clouds. The machine-learning representation of clouds, which they named the Cloud Brain (CBRAIN), could skillfully predict many of the cloud heating, moistening, and radiative features that are essential to climate simulation.
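For readers who want a concrete picture of the general idea (not the authors' CBRAIN code or data), here is a minimal sketch in Python: a small neural network is fitted to emulate the mapping from a model column's state to the convective heating/moistening tendencies that a finer-scale simulation would produce. All variable names and the synthetic data below are illustrative assumptions.

```python
# Minimal sketch of the CBRAIN idea with synthetic stand-in data (not the paper's code):
# learn a mapping from a coarse-grid column state to the convective tendencies
# that a cloud-resolving simulation would have produced for that column.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_columns, n_levels = 2000, 30

# Inputs: stand-ins for temperature and humidity profiles in each model column.
X = rng.normal(size=(n_columns, 2 * n_levels))

# Targets: stand-ins for heating/moistening tendencies, here just a smooth
# nonlinear function of the inputs plus noise.
W = rng.normal(scale=0.1, size=(2 * n_levels, n_levels))
Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(n_columns, n_levels))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# A small fully connected network standing in for the paper's deep network.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
emulator.fit(X_train, Y_train)

print("R^2 on held-out columns:", emulator.score(X_test, Y_test))
```

The point of the exercise is only that the emulator is judged on held-out columns from the same simulation; how well such a fit transfers to a changed climate is a separate question.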
Gentine notes, “Our approach may open up a new possibility for a future of model representation in climate models, which are data driven and are built ‘top-down,’ that is, by learning the salient features of the processes we are trying to represent.”
The researchers also note that, because global temperature sensitivity to CO2 is strongly linked to cloud representation, CBRAIN may also improve estimates of future temperature. They have tested this in fully coupled climate models and have demonstrated very promising results, showing that this could be used to predict greenhouse gas response.
###
About the Study
The study is titled “Could Machine Learning Break the Convection Parameterization Deadlock?”
https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018GL078202
Authors are: P. Gentine1 , M. Pritchard2 , S. Rasp3 , G. Reinaudi1, and G. Yacalis2 (1Earth and Environmental Engineering, Columbia University, New York, NY, USA, 2Earth System Science, University of California, Irvine, CA, USA, 3Faculty of Physics, LMU Munich, Munich, Germany).
You beat me to it GIGO.
June 19, 2018 11:32 pm
I can believe the above, ….. the Learning Disabled thinking they can teach an inanimate object how to learn.
Me thinks their top priority should be …….. teaching themselves how to learn.
They feed the machine with adjusted data. They are fooling both themselves and the computer.
… and SkyNet became conscious, assembling a mass of clouds, encircling the Earth in one, unified, global storm that wiped out all human life as we know it.
Oops !
Machine learning can be a useful tool. But only if the machine is trained on good data, not the cooked data endemic in climate “science”.
The models will never be better than they are now without computing power greater than all the computers ever built put together. Cell sizes will need to be small enough to detect heat shimmer off a car hood and be able to follow the turbulence trail all the way up the air column.
We can’t model rainfall. We can’t model regional temperature. We can’t model the hydrocycle. We can’t model glacier behavior. We can’t model wind. We can’t model sea currents. We can’t model ocean cycles. We can’t model turbulence. I know I missed some. We try to model all of those, but in order for the model to actually be correct 100 years out, it will need to have ALL of these correct in every step. Any of them being off or “parameterized” means that the errors induced by miscalculation or assumption will compound into the future. The idea that you can remove them through averaging is the most ignorant thing I have ever heard someone claiming to be a scientist ever say (in their field – I have heard lots of ignorant comments from scientists talking outside their field).
Even if you do somehow manage to get all those things accurate in your models, your final results will only be accurate if you do ALL of your calculations using infinite precision.
Here in the real world, computers have limited precision for their floating point calculations. Which means that each iteration adds a small error, just because of rounding. With each iteration, this error accumulates. By the time you’ve done enough iterations to get 100 years out, the accumulated errors are larger than the prediction you are making.
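A toy illustration of that rounding-error point (nothing to do with climate models themselves): repeat the same addition a million times in single and in double precision and compare against the exact target.

```python
# Each addition rounds the running total; over many iterations the tiny
# per-step errors accumulate, far more so in float32 than in float64.
import numpy as np

def repeated_sum(dtype, steps=1_000_000, increment=0.1):
    total = dtype(0.0)
    inc = dtype(increment)
    for _ in range(steps):
        total = total + inc   # rounded on every iteration
    return total

print("float32 sum:", repeated_sum(np.float32))   # visibly off from the target
print("float64 sum:", repeated_sum(np.float64))   # much closer, still not exact
print("target     : 100000.0")
```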
Not just infinite precision, but also foreknowledge of every random chaotic input and perturbation, ad infinitum.
Cheers!
Joe
They only need to be accurate down to the Planck scale. That should be sufficient.
Closest I can get you is an “infinite precision” T-shirt. What’s your size?
Dr. Pat Frank said he never met a “climate scientist” who knew the difference between precision and accuracy. Climate scientists think the spread shown in the graphs represents the accuracy error (+/-). What that spread actually shows is the floating point error (precision error), NOT the actual accuracy error of the variable. So, for example, a graph that shows a projected temperature 100 years in the future shows a temperature spread in degrees, and the climate scientists think the + and – degrees in that spread are the error of their model. IT IS NOT. It is only the precision error and has no correlation to an actual temperature spread. The actual accuracy error is built into the model code and is unknowable for each variable unless you can somehow measure the variable against real-life observations. There was a study that concluded the cloud error was in the neighborhood of +/- 4 watts/m^2 per year. Even that was subject to a massive error figure and probably represents the low end of the actual error, because how can you measure this error accurately if you can’t model clouds less than 1.5 km in size? But I digress. The bottom line is that at every point in calculations involving the forcing of clouds, the built-in cloud error alone produces an accuracy error (NOT A PRECISION ERROR) that is so huge after 100 years that you can’t even graph it, because you would have to change the temperature scale so much as to be meaningless.
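A rough sketch of the compounding argument above, assuming (as that argument does) a ±4 W/m² per-step uncertainty in cloud forcing, root-sum-square accumulation, and a nominal conversion of 0.4 K per W/m². All three numbers are assumptions used for illustration, not established values.

```python
# If each simulated year carries an independent +/-4 W/m^2 cloud-forcing
# uncertainty and uncertainties combine in quadrature, the envelope grows
# with the square root of the number of steps.
import math

step_uncertainty_wm2 = 4.0   # per-year cloud forcing uncertainty quoted above
k_per_wm2 = 0.4              # assumed nominal forcing-to-temperature conversion

for years in (1, 10, 50, 100):
    forcing_envelope = step_uncertainty_wm2 * math.sqrt(years)
    print(f"{years:3d} years: +/- {forcing_envelope:5.1f} W/m^2 "
          f"~ +/- {forcing_envelope * k_per_wm2:4.1f} K")
```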
That is only true if we restrict ourselves to models that are constructed as solutions to (partial)(non linear) differential equations.
It is just possible that neural net modeling might come up with solutions that work, even though no one knows why they work.
Damn, you beat me to it.
Yes, I’ve been looking at machine learning recently. It strikes me that it’s only useful in a variety of constrained applications. Generic learning without explicit training on constrained inputs is not around, and probably never will be.
As an after-thought, why create a machine that takes 25 years to learn, and still makes errors, just like people?
“They have tested this in fully coupled climate models and have demonstrated very promising results, showing that this could be used to predict greenhouse gas response.”
The only test is validation against real-world data. But not a single climate model has been fully (or even partly) validated. Hence the GIGO.
The only test is validation against real-world data.
Ah err. No.
There are many times when you cannot practically test the model against real world data
because it will never exist. Example: many disaster simulations (earthquake, tsunami, asteroid impact, supervolcano eruption, Carrington event). You build a model that estimates to the best of your ability the danger/damage of the event. If you are lucky you get some real data from a disaster that comes close to the one you are modelling, to calibrate your model.
The approach they use is pretty standard when it comes to simulating complex systems when you cannot run controlled real world tests.
You build a model of the highest fidelity, and you use parameterizations in lower fidelity models to emulate the higher fidelity models. Done all the time in aircraft simulation, war simulations, etc. Standard practice.
Thanks, Mosh! The best you can do is estimate (= guess) and if you’re lucky you might have got your model right.
I have the same six lottery numbers every week … see where this is going?
We don’t know; we have never known; we cannot ever know.
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible”. Sound familiar?
Now can we please stop wasting trillions of dollars of other people’s money and get on with our lives without being lied to by pseudo-scientists and environmental activists with an anti-civilisation agenda?
@Steven Mosher,
So, you’re saying models are validated by comparing one model against another? This is a perfect example of what’s wrong with climate science.
I really hope that models used to design aircraft are not validated by comparing models against other models. I hope – and assume – aircraft design models are validated only by rigorous physical tests.
No, as Philip says, “The only test is validation against real-world data”.
We’ve had super computer models for at least 30 years, and so we can test them. And they fail miserably. They all predict much more warming than actually occurred.
Irrespective of what “advances” are made in climate modelling, the only change will be to increase the predicted warming. That’s because it would be politically impossible for the IPCC to significantly reduce their predictions of warming. Meanwhile, the peer-reviewed science shows far lower warming sensitivity than that used by the IPCC.
Read my lips. No one can predict the climate in 50 or 100 years time. This admission by the IPCC can’t be repeated too many times:
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible”.
Chris
You are dead wrong about aircraft models.
A good friend of mine worked for the company which produced the Apollo capsule parachutes. His job was to collect the data from wind tunnel experiments so they could accurately model the chute deployment characteristics. It was real time data capture done in Fortran (a first at the time). They used the data to develop chute design models, the first actual tests were quite successful. He eventually went on to write one of the first LISP engines.
and ultimately tested by full size test flights so ultimately it IS validated against real world data.
In the marine game we do the same. We might start with CFD and other analysis to model drag and wave generation etc. as well as motions, but eventually the vessel undergoes trials, and the trials (as well as scale model tests) are used to rigorously calibrate any modelling/analysis/whatever you want to call it.
Yeah ! CARS are like that !
They release them so the public can test them and then when they
find the faults they recall them and change the engine parts , the
safety air-bags , the fuel tanks that are dangerously located etc.
Even the emission controls need tweaking on some models !
But eventually it depends on
HOW they perform IN THE REAL WORLD !
( and the way that science and politics are going
…..the REAL WORLD is getting stranger and stranger !! )
And yet test flying of new aircraft is done and finds issues despite the ‘ability of models’.
By the way, you cannot qualify on a simulator alone; you still need to fly the real thing to get that, despite ‘the ability of models’.
So in both manufacturing and learning, simulators and models are NEVER good enough on their own.
Never heard of a wind tunnel, even a supersonic one? Both Audi and Mercedes dismissed testing, came out with the TT and Smart – both were pile-ups; testing back in vogue.
Asteroids – never heard of the Livermore Gas Gun? Hot Plasma is their game.
Svensmark did a real aerosol experiment with a CME, where CERN “modeled” aerosol growth when they had the best proton source available.
Aside from that, “models” must be able to show paradoxes – like black holes shown both by Einstein and later Hawking not to exist. After all, it was a thought experiment that produced Relativity.
AI is not capable of thought experiments – Gödel’s Incompleteness destroyed Bertrand Russell’s masterplan.
I never thought I’d see it, but there it is. Mosh gives up the goods on climate modeling. His comment should be locked before he edits it.
“‘The only test is validation against real-world data.’
Ah err. No.” (Single quotes added)
Complete scientific hubris. Mosh thinks climate models don’t need verifying. That’s not science.
Well Mosh thanks for letting everyone know that the climate models are not modeling the earth. Instead they are modeling other models. This is clearly the case because models are always compared against what they are modeling. It’s what a model is: “a system or thing used as an example to follow or imitate”. In order to imitate something, a model has to be compared to that something. So if the climate models are not compared against the actual climate, they are not modeling it. It’s that simple.
Utter nonsense.
Otherwise, we’d have eliminated pilots and would strictly fly aircraft by computer intelligence on dangerous missions.
The flying simulation analogy ignores the fact that flying simulations are not “machine learning” programs. Instead they are programs developed over decades addressing simulation errors and human interface problems.
Nor do “war simulations” match or support battle or war situations.
Infinite possibilities.
Infinite situations.
Infinite responses.
Near infinite inputs, depending upon incoming data granularity.
In other words, you can never be certain that your model is correct, but that’s still good enough to demand the entire world’s economy be scrapped.
Steven,
You said this is done in aircraft, war simulations, etc. I have a unique perspective to add. I was a member of the USAF, and during my service I worked in Intelligence. Specifically, I created many MATLAB and other models for aircraft radar and related systems. These models were used to validate the effectiveness of US weapon systems. All of our models were validated by physical tests of the systems we were simulating, whenever possible, and that process was quite cyclical: we’d adjust the model, run the system, then readjust the model. It takes a long time to get it right, and again, we had the physical thing we were modeling.
The irony here is that, until I started working there, doing that work, I was a believer in AGW. I read Hansen’s presentation to Congress as a teenager and was a Greenpeace supporter; I sent them money I had earned from the jobs I had. The thing is, though, I learned that modeling is HARD. Models that are not tuned to reality, in this case the data from the hardware, are bad.
War simulations are known to be mostly garbage, but their usefulness is in helping commanders come up with alternative ideas when the battle occurs, whether it’s a small unit commander or a theater commander. The fog of war and the old von Clausewitz adage apply. This is why we spend all that money doing physical war games. It’s hard for war simulations to simulate tired soldiers, maintenance issues, training deficiencies, and so on.
While it is true, as you say, that modeling is standard practice, it is also known that trusting the modeling alone is both dangerous and foolish. Trusting it alone gets people killed. I can give you far too many examples of this in a military context. The value in models is in giving leaders ideas and options, no more, no less. There is value in that, though.
I will add one example. There was a fighter combat model that was flyable, a true simulation. The model, however, did not accurately model the radar of a particular aircraft. In reality, the radar in question would lose radar lock if the aircraft made a particular physical maneuver that instructor pilots knew to employ and were actively teaching. The problem is that the model of said radar was not accurate, and when the instructor pilots employed the maneuver, the simulated radar never lost lock. This was later fixed, but the point is made.
Steven, you have just shown you know absolutely nothing about machine learning; you are a layman confusing old-school modelling with AI, so please refrain from further comment. Machine learning uses linear regression feeding into logistic regression in a series of backpropagation passes, and if you do nothing else, try to understand that statement.
The backpropagation adjusts the weights of a single neural network, or multiple overlaid neural networks, in order to minimize the error between the model’s result and the desired result. Your error result will have Bayes errors, bias errors, training errors, variance errors, data mismatch errors, overfitting errors and test data set errors, and then you need to work out what to adjust.
There is simply no way to play with the fidelity of machine learning; you never really have any idea what the neural network is actually doing. Machine learning has no ability to be imaginative, or to project, scale, or extrapolate from what it has learned.
So your statement above is factually wrong: you cannot use that approach with machine learning. That is the old-school parameterized modelling approach, which has nothing to do with this article.
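For readers who want to see the weight-adjustment mechanism described above made concrete, here is a minimal, self-contained backpropagation example on a toy problem. It is nothing climate-specific and not the CBRAIN architecture; every name and number is illustrative.

```python
# A one-hidden-layer network fitted to a toy function by gradient descent:
# the backward pass computes the gradient of the mean squared error with
# respect to each weight matrix, and the weights are nudged to reduce it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X)                        # toy target function

W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.1

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # backward pass: gradients of the mean squared error
    d_pred = 2 * err / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)   # derivative of tanh
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # gradient descent update of the weights
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float(np.mean(err ** 2)))
```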
The TL;DR version of the above is:
Machine-learned models are fits. There is no physics involved, whatsoever.
To get a different result, train it on different data.
No doubt they’ll think they can run the model for a while and retrain it on the “new climate state” that results. It’ll be a total fantasy.
I see. If you can’t test your model with real-world data, just estimate it to the best of your ability using the same assumptions and biases you had when you programmed your model. What could go wrong? And if you drop your car keys while walking in the dark, just head over to the nearest street light and look for them there. You know you won’t find them in the dark, so go look for them where the light is better. /Sarc
It would be good if these guys had to bet their salary on the predictions being roughly correct.
Double or nothing.
How much more computing power is required to have the AI assessing each cloud in the simulation…. and then feed that information back into the overall climate model for N iterations more? The ‘simplified’ parameterized climate models already challenge the computing capacity of the most powerful mainframes.
Should work, but only if we change the projectionists who load the film?
“A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening”
Well duuuhh! So if this is a matter still in play, i.e. they still have not come up with accurate models of “clouds and their atmospheric heating and moistening”, on what basis all the alarmism?
If I’ve got this right, the modellers admit quite openly they can’t model clouds and humidity, one of the most important factors in climate. Yet they want us to base policy on their work?
They’ve also acknowledged that they cannot model the influence of (very difficult to predict) volcanic activity. But, hey, other than the things they cannot model, the models are pretty good. (Snark)
And Mosh, just what data do you calibrate the machine learned model against?
The kiddy fiddled, pasteurized, homogenised, aggregated, sausage ‘meat’ of local thermometer data (with added truthiness) or satellite and balloon instrument generated data as part of a built and fit for purpose global system but which is only a few decades long?
Or
Do you hockey schtick it into shape taking expert advice from Michael Mann and co, with or without a Nature Trick or two??
did they honestly say “global temperature sensitivity to CO2 is strongly linked to cloud representation”? i.e. Global temperature sensitivity to CO2 is strongly linked to atmospheric H2O? Did I understand that correctly?
also: “We here present a novel approach to convective parameterization based on machine learning, using an aquaplanet with prescribed sea surface temperatures as a proof of concept.” – An aquaplanet? so they didn’t even try to model the earth.
so, they used a SuperParameterized Community Atmosphere Model (SPCAM3) to generate two years of data, and they find that a neural network trained on one year of that data (in fact they say 3 months of data was enough) produced predictive results for the second year that largely agreed with the model. “Overall, the NN predictions agree remarkably well with the SP-CAM truth.” Well, no $#!t. Oh, except where it wasn’t as good:
“In these lower levels, the predictions here have significantly less variability in terms of its mean squared error loss function, which encourages the ANN to predict just an average value in cases where it is not certain.”
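A quick numerical check of the point in that quote, using toy data rather than anything from the paper: under a mean-squared-error loss, the best single prediction for an outcome the network cannot resolve is the average of that outcome, so an uncertain network drifts toward predicting the mean.

```python
# The MSE-optimal constant prediction for a random outcome is its mean.
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.choice([-2.0, 0.0, 3.0], size=100_000, p=[0.3, 0.4, 0.3])

candidates = np.linspace(-3, 4, 701)
mse = [np.mean((outcomes - c) ** 2) for c in candidates]

best = candidates[int(np.argmin(mse))]
print("MSE-optimal constant prediction:", round(best, 2))
print("mean of the outcomes:           ", round(float(outcomes.mean()), 2))
```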
I’m not an expert on Machine Learning, far from it, and it’s such a common buzzword everywhere these days I’m not surprised they are trying this. But unless you have vast amounts of actual, unmodified data, surely there is no point in starting?
as everyone else has stated. Garbage in, Garbage out.
I would like to hear Pat Frank’s take on all of this about machine learning of climate science. I bet that you could find a child with a perfect memory and great intelligence (people like that do exist) and then have a committee teach that child, over 15 years, every principle about climate change that is known. If you then insisted that they incorporate the idea that CO2 causes temperature forcing on any significant scale, I predict that this child, now grown up, would turn skeptic immediately and say, Are you nuts? And then on 2nd thought, maybe NOT. The reason is that there seem to be 1000’s of groupthinkers who think that CO2 does create significant forcing of temperature increases.
they should rename it: S4BRAINS
James Hansen was one of the first computer climate modellers, and in 1988 he predicted warming scenarios. Because he actually published 2 papers in 1981 on CO2 forcing and went to Congress twice, testifying in 1987 and again in 1988 in favour of global warming, you may accurately say that James Hansen is the father of computer climate modelling.
James Hansen is truly deranged. Completely unstuck mentally. A nut case, devoid of any common sense or rational thought. To think he was the director of GISS for 32 years and was the initiator of global warming in the US where he preached before Congress twice. It boggles the mind. He was arrested 5 times for protesting illegally for green causes. Some of his predictions and some statements in his own words, and hallmarks of his life are as follows:
1) In 1988 he predicted that the Hudson River would overflow because of rising sea level caused by CO2 and New York would be underwater by 2008.
2) In 1986 he predicted that the earth would be 1.1C higher within 20 years.
3) Then by 1999 he said that the earth had cooled and that the US hadn’t warmed in 50 years.
4) He had also said that the Arctic would lose all of its ice by 2000.
5) In December 2005, Hansen argued that the earth will become “a different planet” without U.S. leadership in cutting global greenhouse gas emissions.
6) He then reversed course again and said in March 2016 that the seas could rise several metres in 50 to 150 years and swamp coastal cities.
7) He also said that global warming of 2C above preindustrial times (~ 1850) would be dangerous and that mankind would be unable to adapt.
8) In 2009 Hansen called coal companies criminal enterprises and said that Obama had 4 years left to save the planet.
9) In 2012 Hansen accused skeptics of crimes against humanity and nature.
10) Hansen is involved with a 2015 lawsuit involving 21 kids that argues that their constitutional rights were interfered with by CO2
11) In 2017 he admitted that CAGW does not happen with burning fossil fuels.
“One flaw in my book Storms of My Grandchildren is my inference you can get runaway climate change on a relatively short timescale. ”
“Do you think that’s possible on a many-millions-of-years timescale?
It can’t be done with fossil-fuel burning.”
12) Then he said “But if you’re really talking about four or five degrees, that means the tropics and the subtropics are going to be practically uninhabitable.”
He doesn’t seem to know that their average temperature is 28C.
13) But then he said that climate change was running a $535 trillion debt
14) He has been quoted many times equating climate change with all sorts of extreme weather events. No database in the world shows any more of them than there ever were.
15) Hansen has published way over 100 fraudulent climate studies with almost all of them using results from computer climate models that are woefully inadequate and that have never been validated except by the human modeler.
Obviously the man just doesn’t know when to shut up.
His model was correct.
https://www.carbonbrief.org/analysis-how-well-have-climate-models-projected-global-warming
Was Hansen one of the first? Hmm. The first model was from the 1930s. It also was correct in predicting a rise in temperature from increased CO2.
The only temperature data set that skeptics trust is the UAH satellite temps from 1979. The global warming wars will be fought over this data set. The 1st volley was shot in 2017 when some alarmist sympathizer fired 7 military-style bullets into John Christy’s office, the very one that produces the UAH dataset. Since Carbon Brief compared all results to the fraudulent datasets put out by NASA, NOAA, NCAR and the MET office, their study is fraudulent as well. And indeed Carbon Brief did not even include UAH.
Since CO2 warming was not even close to being accepted by the scientific community in 1930, or indeed up to 1980, you can’t really count any of those studies as being 1st. Don’t forget that global cooling was the consensus from 1969 to 1979.
The real question is: did James Hansen know that Richard Feynman was dying of cancer when Hansen 1st testified before Congress in the fall of 1987? IF SO, THAT EXPLAINS EVERYTHING. If not, then James Hansen is a braver man than I have ever given him credit for. If he didn’t know, then Hansen was risking his career by doing what he did. The reason is simple. Feynman would have demolished “CO2 causes significant warming” in a 2-page article explaining why this is impossible. Hansen didn’t get a good reception in 1987, so he bided his time and tried again in June 1988.
By that time he knew that Feynman was dead.
This time he had some insiders at Congress open up all the windows the night before and had the air conditioners turned off. He had also asked the weather office earlier which day would be the hottest, because the congressmen had allowed him to pick what day to testify. For once the weather office prediction was correct, and it was the hottest day of the year the next day when he testified. Obviously the night before was also very hot, and thus having all the windows open made the indoor temperature nearly the same as the outdoor temp. With all the TV cameras on, Hansen was sweltering along with all the legislators, and his talk about global warming having already started fooled everyone in the room. The global reach of global warming had truly started. The IPCC got started later that year and the train ride to hell got going.
Did Hansen really believe that it was true? I believe that he did, but in 1999 he lost some faith (see No. 3 above). However, that year saw huge Arctic melting again, and he regained his religion, never to lose faith again except for the following: the little logic left in him by 2017 caused him to lose faith in CAGW caused by fossil fuels. See No. 11.
So one wonders just what evils CO2 does if it doesn’t cause CAGW? I guess Hansen believes it is a long slow train ride to a burning HELL, if you don’t get drowned 1st because of the rising seas caused by ice melt that could never melt enough before you would have perished from the heat anyway. So which is it, Hansen? Fire or brimstone? (OK, brimstone-laden water.) Since brimstone is sulphur, maybe the ice carries sulphur with it when it melts and travels to the seas, but I digress. Pick your poison, sinners.
Death by fire or death by drowning. Oh, maybe I’ll pick drowning because I can build an ark. Oh, so that is what this is all about: a recreation of Noah’s flood. Except the earth has too many species to fit into any ark that mankind could build. Okay, we are back to death by fire for more than 1 reason. 1st they told me there was a Santa Claus. Next they told me there was a God. Now they are telling me that Mr CO2 is a bad guy who should be locked up forever in that fossil fuel in the ground and never see the light of day. Fire and brimstone. Where have I heard that before? Hmmmm.
The more I study Richard Feynman’s life, the more I realize just what a giant he was. From Wikipedia:
“known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics for which he proposed the parton model. For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Shin’ichirō Tomonaga, received the Nobel Prize in Physics in 1965.”
“In a 1999 poll of 130 leading physicists worldwide by the British journal Physics World he was ranked as one of the ten greatest physicists of all time.”
“He assisted in the development of the atomic bomb during World War II”
“Along with his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing and introducing the concept of nanotechnology”
“Attendees at Feynman’s first seminar, which was on the classical version of the Wheeler-Feynman absorber theory, included Albert Einstein, Wolfgang Pauli, and John von Neumann. Pauli made the prescient comment that the theory would be extremely difficult to quantize, and Einstein said that one might try to apply this method to gravity in general relativity,[36] which Sir Fred Hoyle and Jayant Narlikar did much later as the Hoyle–Narlikar theory of gravity.[37][38] Feynman received a Ph.D. from Princeton in 1942; his thesis advisor was John Archibald Wheeler.[39] His doctoral thesis was titled “The Principle of Least Action in Quantum Mechanics”.[40] Feynman had applied the principle of stationary action to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, and laid the groundwork for the path integral formulation and Feynman diagrams.[41] A key insight was that positrons behaved like electrons moving backwards in time.[41] James Gleick wrote:
At twenty-three … there may now have been no physicist on earth who could match his exuberant command over the native materials of theoretical science. It was not just a facility at mathematics (though it had become clear … that the mathematical machinery emerging in the Wheeler–Feynman collaboration was beyond Wheeler’s own ability). Feynman seemed to possess a frightening ease with the substance behind the equations, like Einstein at the same age, like the Soviet physicist Lev Landau—but few others.”
I ASK YOU: IS THERE ANYTHING IN PHYSICS THAT THIS MAN DIDN’T KNOW? He knew about light and IR. If his attention had been pointed to global warming, he would have demolished it before breakfast.
I am again crying as I write this. His most famous quote among hundreds is the following:
In 1974, Feynman delivered the Caltech commencement address on the topic of cargo cult science, which has the semblance of science, but is only pseudoscience due to a lack of “a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty” on the part of the scientist. He instructed the graduating class that “The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
Another one that I like is:
“If you can’t explain it to a six year old, you don’t really understand it”
If you haven’t yet, read “Genius” by Gleick.
It can be tough to slog through, but it is an amazing book about an amazing man.
‘Of the three, scenario B was closest to actual radiative forcing, though still about 10% too high. Hansen et al also used a model with a climate sensitivity of 4.2C per doubling CO2 – on the high end of most modern climate models. Due to the combination of these factors, scenario B projected a rate of warming between 1970 and 2016 that was approximately 30% higher than what has been observed.’
If this model were ‘correct’ I would like to see one that was not.
As is said, a miss is as good as a mile.
Machine learning is not a novel technology in Australia.
We need a clean feed of data.
https://jennifermarohasy.com/
‘THE next really big breakthrough in environmental management could come from better forecasting of droughts and floods. In particular, using machine learning to mine historical climate data, build models based on clever algorithms, and use these to forecast rainfall.
If you would like to understand this technique that John Abbot and I have developed, consider downloading our most recent book chapter. It can be downloaded as a PDF from this link, from the ClimateLab.com.au.
As a consequence of my involvement in this work, and searching for the best long historical temperature series, I stumbled across deficiencies in how the Australian Bureau of Meteorology archives its temperature data – and how it remodels temperature series to make them more consistent with human-caused global warming theory.
Over the last year I’ve come to realize that the problem extends far beyond remodeling raw data. There are also issues with the calibration of the electronic probes that have been used to measure temperatures since November 1996; this casts doubt over the integrity of the actual raw data. ‘
From memory, she and Dr Abbot have calculated the sensitivity of the atmosphere to a doubling of CO2 to yield a rise of 0.6 C.
It seems more reliable than that of Dr James Hansen.
The download contains information on original Australian research on predicting rainfall
‘Entitled, ‘Forecasting of Medium-term Rainfall Using Artificial Neural Networks: Case Studies from Eastern Australia’, chapter 3 explains how and why rainfall forecasting is amenable to machine learning. We also explain how we developed and improved our forecasting technique from 2012 through to 2016.’
“His model was correct.”…well thank God for the El Nino
so correct, Hansen even caught the pause……snark
Steven, your “His model was correct” post about a model in the 1930s demonstrates that you really don’t comprehend what is happening. There were no computers to run complex models in the 1930s. The models being discussed here, and climate prediction models in general, all require fast supercomputers to operate. Climate modeling today couldn’t exist without so-called supercomputers.
Back in the 1980s I had a paper whose author was trying to develop a model for an ecosystem out west; if I remember correctly, a river basin. He and his graduate students first tried to determine how many different parameters and differential equations would be necessary to adequately model the ecosystem in question. I don’t remember the number they came up with, but at the time there was no supercomputer that could “efficiently” handle their model.
It has only been recently that water vapor and clouds were attempted by modelers. Why? Because of the complexity of water in the system. I debated a climate modeler in public hearings in the early 2000s. After some pushing and prodding, and the chairman demanding the modeler answer my question, he admitted that understanding water vapor, especially clouds, and the oceans was vital to ever properly modeling climate. He then discussed the complexity of water vapor in the system. He also admitted that climate, being an evolving chaotic system, might never be modeled well enough to predict future climate, no matter the cause.
Mosher,
The thing about computer models is that you can get almost any result from them, from absurd to reasonable. However, even the reasonable results may only be relied upon within a limited range. Therefore, it is essential that one test the models against reality to know 1) if the model produces any reasonable results; 2) and if there are limitations on the range in which those reasonable results may be obtained. To paraphrase Dirty Harry, a programmer has to know his/her limitations.
Before one can make such a claim as yours: “His model was correct.”, there has to be agreement on the acceptable tolerance of predicted temperature, and a detailed analysis of both false-positives and false-negatives. I’m not sure that any of the models cited by Hausfather are any better than naive extrapolations of the historical temperatures, without any attribution of cause. Indeed, that might be a good first-order test of the models: Do they provide significantly better results than a linear or quadratic extrapolation of a least-squares fit of the historical temperature data?
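A sketch of what that baseline test could look like, using a purely synthetic temperature series and an arbitrary train/test split; any model claiming skill should beat these naive extrapolations on the held-out years.

```python
# Fit linear and quadratic least-squares trends to the "historical" part of a
# synthetic temperature series, extrapolate them forward, and score the result.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2019)
temps = (0.01 * (years - 1950)
         + 0.1 * np.sin((years - 1950) / 7)
         + rng.normal(0, 0.08, years.size))

train = years <= 2000                      # fit on 1950-2000, test on 2001-2018
for degree in (1, 2):
    coeffs = np.polyfit(years[train], temps[train], degree)
    forecast = np.polyval(coeffs, years[~train])
    rmse = np.sqrt(np.mean((forecast - temps[~train]) ** 2))
    print(f"degree-{degree} extrapolation RMSE on held-out years: {rmse:.3f} K")
```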
In 1988, Hansen claimed “…global temperatures could increase by 0.54 degrees Fahrenheit per decade until the middle of the next century,…” Well, it has been thirty years and that prediction should have resulted in a 0.9 degree C increase, which is about the total increase in the last 100 years. Obviously, the prediction is about 3X the actual amount. If your financial advisor told you that you could expect about a 9% return on an investment over thirty years, and you only got 3%, would you be happy with his advice?
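A quick check of the unit conversion in that comparison:

```python
# 0.54 F per decade over three decades, converted to Celsius.
f_per_decade = 0.54
total_f = f_per_decade * 3                            # thirty years
print(total_f, "F =", round(total_f * 5 / 9, 2), "C") # about 0.9 C
```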
The one thing that I do agree with Hausfather on is, “Comparing these models with observations can be a somewhat tricky exercise.”
The article that you provided a link to above states “Climate models can be evaluated both on their ability to hindcast past temperatures and forecast future ones.” The only thing that is important is the ability to forecast. The ability to “hindcast” what we already know is irrelevant. It may make the modeler feel good, but does not guarantee that the forecast will be reliable!
Now, the above only addresses temperature. It is my understanding that the extant models often give contradictory predictions of future regional precipitation. If one gets diametrically opposed predictions of drought versus flooding, then you don’t know which is right and the models are of no value!
Troll.
Models are never correct.
The map is not the territory
“Yet accurate predictions of global warming in response to increased greenhouse gas concentrations are essential for policy-makers (e.g. the Paris climate agreement).“
I think they are implying all the past model outputs were crap. That is, they are in over their heads and need AI to save their model junk.
Oh … and send more money.
Once again we have a press release written by a scientific illiterate and not checked by an actual scientist.
An aquaplanet is an idealized setup. It is entirely covered by water. It does NOT have continents. link
I wonder what else the writer got wrong.
We see this on such a regular basis. The rule seems to be that we must assume that press releases about science contain major errors unless proven otherwise.
“We have large uncertainties in our prediction of the response of the Earth’s climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases. “
I thought these models were accurate? This author is going to get kicked out of the club for this blasphemy.
We have large uncertainties in our prediction of the response of the Earth’s climate to rising greenhouse gas concentrations. But we still claim it’s ‘settled science’ and attack anyone who suggests otherwise.
‘Cloud Brain’ – Yes, methinks their brains are heavily clouded……
The phrase: “cloud cuckoo land” comes to mind.
“Cloud cuckoo land is a state of absurdly, over-optimistic fantasy or an unrealistically idealistic state where everything is perfect. Someone who is said to “live in cloud cuckoo land” is a person who thinks that things that are completely impossible might happen, rather than understanding how things really are. It also hints that the person referred to is naive, unaware of realities or deranged in holding such an optimistic belief.”
The phrase is of great antiquity:
“Aristophanes, a Greek playwright, wrote and directed a drama The Birds, first performed in 414 BC, in which Pisthetaerus, a middle-aged Athenian persuades the world’s birds to create a new city in the sky to be named Nubicuculia or Cloud Cuckoo Land (Νεφελοκοκκυγία, Nephelokokkygia) …”
https://en.wikipedia.org/wiki/Cloud_cuckoo_land
What they need to do is study birds more, and their response to climate change. Then, through convective parameterization based on machine learning, they could train a deep neural network to learn from a simulation that explicitly represents birds. The machine-learning representation of birds, which they could name the Bird Brain (BBRAIN), could then skillfully “predict” many of the things they want to show, and which are already built into the models.
Because that’s how science works.
Bruce : I thought it was the activity of ANTS and TERMITES !
And also , Native American Indians storing firewood because it was
evidence that there was going to be a very cold Winter………………
now you tell me it’s BIRDS , migratory or non-migratory ?
I know a good story about a swallow that refused to fly South for the Winter
but then again…….so does everyone else…………and THAT STORY
concerned a large amount of BULLSHITE as well !
Lots of waffle words are necessary to push this nonsense.
Machine learning requires real world inputs on a global, full-atmosphere scale, in comparison to machine calculations, to allow the program to incorporate self-adjusting logic. Even then, that self-learning logic is written from the narrow, blindered perspective of human programmers.
In a binary world, which is what all machine language boils down to, that requires massive comprehensive data as constant feedback to a program.
This must happen at all scales worldwide.
Otherwise it is just another method for climate programmers to tinker with fudge factors.
It must also be noted that machine learning programs have a problem with code bloat. i.e. a program that constantly rewrites code, adjusts parameters, writes new modules, etc. gets larger over time.
We are running into a major problem in modern society relative to the oversell, the hype, of artificial intelligence and so-called learning machines. Many people believe that Amazon’s talking box is really smart. I can talk to and ask questions of my TV, but my dog is smarter; he just can’t talk back or change channels. AI has its place, but as discussed here, what data, information, etc. is used makes a huge difference, no matter how big the “cloud” used might be. Then even with the best data, the data of the highest quality, is it “enough” data or even the right data? As fast as the modern supercomputer might be, they still have a difficult time emulating the human brain, even a simple brain model. Like climate, the human brain is such a complex system that it is nearly impossible to model. How we learn, and then use what we learn, is one of the greatest challenges for those studying the human brain. My question about this so-called learning model is exactly how they are obtaining and inputting cloud data. Does anyone fully understand, or understand well enough, how a given cloud type at a given altitude is formed and exactly what effect it has on the surrounding atmosphere? How about two to four different cloud types over the same area at the same time?
What would really be funny is if they could actually build some “AI” that actually lived up to the hype. And after it started reinstating original measurements due to finding scientifically unjustified “adjustments,” and debunking their pet theory of human-induced climate catastrophe, we would see how fast they would pull the plug!
Programming themselves right out of a job, congratulations. Second part: do they program in their own biases so that the AI leans in one direction from the start?
Winchester : Obviously a man of HIGH CALIBRE !
No ! I think that IF they can get AI involved they will
have someone else TO BLAME when the whole thing goes A$#! UP !
There will more probably than not BE A PERIOD OF COOLING AFTER
THIS WARM PERIOD and it will be easier to divert attention to the AI !!
WHAT ? GLOBAL WARMING…….WHO ME …..!!?
NEVER……..IT WAS HIM …………AI…………YOU KNOW…….WILL SMITH !
THAT WAS I ROBOT ?? ……..well …anyway it wasn’t ME !
At least they are admitting that the models have problems and those problems aren’t going away despite 30 years of work on them.
Yes, in the link that Mosher provided, Hausfather makes the classic understatement, “Models are FAR from perfect and will continue to be improved over time.” But, how will we know if they are improving if we don’t have performance standards and accepted protocols for testing performance?
Climate “science” reminds me of the math tricks, where you are asked to think of a number, then using that as a starting point, go through a number of steps, after which the person tells you what your result is, and gee whiz, they are right! Climate “science” does essentially the same thing. No matter what the question, or starting point is, the “answer” will always be “we’re doomed”.
“They trained a deep neural network to learn from a simulation …”
Like trying to learn about geopolitics by having the deep neural network learn by watching episodes of Game Of Thrones.
Mathematical onanism.
They should stop before they go blind.
There is no need for any evidence, models, or tests when the danger is so great. There is only a one percent probability that we will begin to radically reduce emissions. Today, even the Paris treaty is completely insufficient.
The exact same thing could be said about alien invasions and a whole host of other imaginary planetary calamities. Still doesn’t make them worthy of spending one dime or ounce of effort trying to prevent them (the list being endless after all).
Machine learning is indeed the key. Alarmists themselves never learn.
Wouldn’t it be a hoot if they created this amazing AI and fed in all of the climate nonsense and it came back and told them they were full of crap?
Anthony,
You wrote, “… (an aquaplanet, or a planet with continents)…” Should that be “a planet without continents”?
Machines can just do a better job of curve-fitting than climate modelers can do. The physics of clouds is no better understood by machines than it is by humans. Using machines to deploy the same flawed physics is just a different way to make the same mistakes.
There’s this weird almost subconscious idea spreading around that using computers to make decisions for humans somehow removes bias and prejudice. It doesn’t. All computers do is make prejudice dispassionate.
I don’t recall who said this, but the heart of the matter is that where there’s artificial intelligence, there is also artificial stupidity. Given the way things work, AS will be more prevalent than AI.
Yes, that was my immediate reaction to the Article before I even started to read the comments. This won’t be “Artificial Intelligence,” this will be “Artificial Stupidity,” because they’ll just be feeding the newfangled machine the same old stupid input assumptions, and at the end of the day, no matter how “sophisticated” the computer, GIGO still applies.
I majorly concur.
I guess machine-garbage-out is somehow more acceptable, because then you can blame stupid machines instead of stupid humans. But no, stupid humans TAUGHT the machines to be stupid.
You know what I find so amazing? It is that 100% of the “physics” of purported global warming (caused mostly by CO₂ it is said, but also by CH₄ (methane), N₂O (nitrous oxide), O₃ (ozone), (CHCl)^ⁿ (chlorocarbons), (CFCl)^ⁿ (chlorofluorocarbons CFCs), and certain rare but potent industrial gas byproducts) can be computed to at least 2 digits of precision … using the CPU of a laptop.
Computing this doesn’t require a floor-full of hyper-parallel processors, billions of gigabytes of memory, the power consumption of a small town, or the investment of tens — nay, hundreds — of millions of dollars in machinery, operators, programmers and public relations sycophants.
Indeed: there is not ONE extant piece of documentation showing that the results painstakingly derived from these giant sub-billion dollar pterodactyls … have resulted in even 3 sig-figs of precision in prediction or, for that matter, in any unusual result not previously predicted, nor anything entirely unexpected.
And THAT is the point.
IF (as there must, at some level) there is anthropic greenhouse gas “global warming” enhancement of the climate, THEN it has mostly been sussed out in the late 19th and early 20th centuries. Which is sobering, at any level.
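For what it’s worth, the first-order arithmetic really does fit on a laptop. Below is the commonly cited simplified expression for CO2 radiative forcing (Myhre et al. 1998) combined with an approximate no-feedback response of about 0.3 K per W/m²; that conversion factor is an approximation, and the feedbacks this sketch leaves out entirely are where the contested uncertainty lives.

```python
# Back-of-the-envelope CO2 forcing and no-feedback warming.
import math

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing for a CO2 change from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (400, 560):
    f = co2_forcing_wm2(c)
    print(f"CO2 at {c} ppm: forcing ~ {f:.2f} W/m^2, "
          f"no-feedback warming ~ {0.3 * f:.2f} K")
```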
The amazing thing is the duplicity, the rhetorical chest beating, gnashing of teeth, ostracizing and denigration of the World Powers who haven’t the temerity to outright mandate the cut and bleed of their economic sovereignty by way of carbon-fueling their energy needs.
Yet, on this blue-green gem of a planet, we are adding a measurable and substantial amount of CO₂ to the atmosphere every year. I don’t think anyone here seriously considers this to be fiction. It is measurable and measured — Mauna Loa famously — but also at hundreds of other observation sites globally. It is real, and it is more-or-less in lockstep with the collated global production numbers for coal, natural gas and petroleum. We don’t even need “consumption figures”.
Point tho is: the effect is rather small considering the absolute degree of hyperventilation going on in the pseudo-science community. The “Save the Trees, eat a Beaver” crowd desperately need a religious iconography to worship. It used to be (embarrassingly) boreal forests disappearance (which unhelpfully have been growing faster, thicker and more solidly with increased CO₂), it used to be glowing orange forests because of toxic acid-rain.
Now it is supercomputers, billion dollar “research and grant” budgets.
And frankly, I think it’s just gawdawfully mendacious. Because year-over-year, the people of the planet, still very much RAISING THEMSELVES UP BY THEIR BOOTSTRAPS, are digging up more coal to burn, more petroleum to refine, more natural gas to cook, heat and produce avidly wanted electricity. Cars go, mopeds zoom, lights turn on, smart phones are recharged and work; the Internet allows one-and-all to learn of the whole world’s trends, knowledge, fandom and hypocrisies.
Energy.
More every year.
Yielding, inevitably, more CO₂, CH₄ and rare-but-potent byproduct greenhouse gasses.
Just mendacious to think it needs billion-dollar research (accomplishing exactly naught).
GoatGuy
And to reduce the worry even further, even if the consumption of fossil fuels seems to coincide with the CO2 increase measured at Mauna Loa, it may be just coincidence. The last report I read said that, of the CO2 in the atmosphere, the amount contributed by humans may be as low as 4%. Further, the economic depression of 2008 resulted in a dip in fossil fuel consumption, but Mauna Loa did not record a corresponding decline in the rate of CO2 increase. So, burn baby burn! Be efficient, since the less money you pay to power your production the better it is for the bottom line, but otherwise, burn all you need!
Don’t forget to train it to bully anyone who questions its assumptions, methodologies, or conclusions, and to throw punch cards in the air for theatrics.
The proposition to use AI to improve long-term climate predictions sounds like a non-starter. The project begins with the assumption that CO2 concentration in the atmosphere is a significant cause of global warming, i.e., another “What if?” study. The researchers say the model has been “tested” with good results. What does that mean? Good results might be to match GCMs, which have all been failures in predicting short-term temperatures and untested by long-term, real world data.
Cloud formation is a chaotic process. Will AI find order in a chaotic process? A shortcoming of AI in oil exploration is that it removes “anomalies” from databases. The “anomalies” in databases might be important in locating oil accumulations. They may also be important in climate studies.
I view a project like this one as a perpetual money sinkhole, a never-ending project going in the wrong direction.
RIGHT… neural networks always in training, ever tweaking, with massively parallel current states that are best described as “um, er, just like this, see here” at any given moment because they are self-modifying… are sure to do a better job than any cell-based deterministic model. This is obvious, because whenever we have a tough job that has to be done right, we always ask a schizophrenic to do it. /s (after witnessing personality simulations)
[not /s, Q:] What exactly does “global temperature sensitivity to CO2 is strongly linked to cloud representation” mean?
“ What exactly does “global temperature sensitivity to CO2 is strongly linked to cloud representation” mean?”
It means when modelers change their parameter sets to some equally plausible set (i.e., within the limits of uncertainty), their model produces a different warming trend for the identical trend in CO2.
The cloud types, cover, and persistence change with the parameter sets, no matter that the forcing trend was the same.
Changing parameters to see what the model does is called a sensitivity analysis, or more generally, a “perturbed physics” test.
Although never interpreted this way in the field, the results show that, as regards CO2 and climate, no one knows what they’re talking about.
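As a concrete toy illustration of that perturbed-physics point, the sketch below runs the same forcing ramp through a minimal energy-balance model while drawing one assumed “cloud feedback” parameter from an assumed plausible range. Every number here is an illustrative assumption, not a value from any GCM.

```python
# Same forcing, different (equally "plausible") cloud feedback parameter,
# noticeably different warming in 2100.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2101)
forcing = np.linspace(0.0, 4.0, years.size)         # identical forcing ramp, W/m^2

heat_capacity = 8.0     # toy effective heat capacity (years per K per (W/m^2))
base_feedback = 1.2     # toy net feedback parameter excluding the perturbed cloud term

for cloud_feedback in rng.uniform(-0.6, 0.6, size=5):    # the perturbed parameter
    lam = base_feedback + cloud_feedback
    T = 0.0
    for f in forcing:
        T += (f - lam * T) / heat_capacity           # simple relaxation toward balance
    print(f"cloud feedback {cloud_feedback:+.2f} W/m^2/K -> warming in 2100: {T:.2f} K")
```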
One grain of sense in the article that should be at the top of every report on climate predictions. “We have large uncertainties in our prediction of the response of the Earth’s climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases.”
They’re a bit late in trying to hitch their research to the CLimate Grant Boondoggle. There’s a new sheriff in town and he ain’t impressed.
Not yet.
The people with the most money to spend on this right now are at Google.
Their biggest motivation and market is targeting advertising to consumers with AI/machine learning.
When I searched for X-ray spectrophotometers, I suddenly found myself receiving adverts for transparent underwear.
That is the current level of AI/machine learning.
“A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. ”
And I thought that the climate science was settled. Stupid me.
I’ll trust the output of an uneducated Commodore 64 over anything Michael Mann produces.
Having just come back from a conference on AI, I understand that what makes AI work is realism – deep learning allows for ever closer and more accurate representations of the structures of reality. The more accurate the representation, the better the predictive capability.