One point struck me, reading Anthony’s fascinating account of his meeting with Bill McKibben. Bill, whose primary expertise is writing, appears to have an almost magical view of what computers can do.
Computers are amazing, remarkable, incredibly useful, but they are not magic. As an IT expert with over 25 years’ commercial experience, someone who has spent a significant part of almost every day of my life since my mid-teens working on computer software, I’m going to share some of my insights into this most remarkable device – and I’m going to explain why my experience of computers makes me skeptical of claims about the accuracy and efficacy of climate modelling.
First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience; it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine which is built to serve your needs – it’s an eerie sensation, feeling your intellectual reach unfold and expand with the help of a machine.
But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.
So when I see scientists or politicians claiming that their argument is valid because of the output of a computer model they created, it makes me cringe. To my expert ears, all they are saying is that they embedded their opinion in a machine and it produced the answer they wanted it to produce. They might as well say they wrote their opinion into an MS Word document and printed it – here is the proof, see, it’s printed on a piece of paper…
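To make that concrete, here is a deliberately trivial sketch – my own illustration, not anyone’s actual model – in which the “projection” is nothing but the sensitivity figure the author chose to type in:

# A toy "model" (hypothetical, for illustration only): the output is entirely
# determined by the sensitivity number the author embedded in it.
import math

def toy_projection(co2_start_ppm, co2_end_ppm, assumed_sensitivity_per_doubling):
    doublings = math.log2(co2_end_ppm / co2_start_ppm)
    return assumed_sensitivity_per_doubling * doublings

print(toy_projection(280, 560, assumed_sensitivity_per_doubling=3.0))  # prints 3.0
print(toy_projection(280, 560, assumed_sensitivity_per_doubling=1.5))  # prints 1.5

Whatever number the author believes in is the number the machine hands back, dressed up as a computation.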
My second thought is that it is very easy to be captured by the illusion that a reflection of yourself means something more than it does.
If people don’t understand the limitations of computers, if they don’t understand that what they are really seeing is a reflection of themselves, they can develop an inflated sense of the value the computer is adding to their efforts. I have seen this happen more than once in a corporate setting. The computer almost never disagrees with the researchers who create the software, or who commission someone else to write the software to the researchers’ specifications. If you always receive positive reinforcement for your views, it’s like being flattered – it’s very, very tempting to mistake flattery for genuine support. This is, in part, what I think has happened to climate researchers who rely on computers. The computers almost always tell them they are right – because they told the computers what to say. But it’s easy to forget that all that positive reinforcement is just a reflection of their own opinions.
Bill McKibben is receiving assurances from people who are utterly confident that their theories are correct – but if my theory as to what has gone wrong is correct, the people delivering the assurances have been deceived by the ultimate echo chamber. Their computer simulations hardly ever deviate from their preconceived conclusions – because the output of their simulations is simply a reflection of their preconceived opinions.
One day, maybe one day soon, computers will transcend the boundaries we impose. Researchers like Kenneth Stanley and Alex Wissner-Gross are investing significant intellectual effort in finding ways to defeat the limitations software developers impose on their creations.
They will succeed. Even after 50 years, computer hardware capabilities are growing exponentially, doubling every 18 months, unlocking a geometric rise in computational power – power to conduct ever more ambitious attempts to create genuine artificial intelligence. The technological singularity – a prediction that computers will soon exceed human intelligence, and transform society in ways which are utterly beyond our current ability to comprehend – may only be a few decades away. In the coming years, we shall be dazzled with a series of ever more impressive technological marvels. Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model – will in the next few decades start to fall like skittles before the increasingly awesome computational power and software development skills at our disposal.
But that day, that age of marvels, the age in which computers stop just being machines and become our friends and partners, maybe even become part of us through neural implants – perfect memory, instant command of any foreign language, immediate recall of the name of anyone you talk to – that day has not yet dawned. For now, computers are just machines; they do what we tell them to do – nothing more. This is why I am deeply skeptical about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.
A world climate computer model based on real scientific principles and relationships would be great. Unfortunately, most of today’s models are based on mathematical approximations, not real physical laws. The ultimate program should be able to take the current data from the field and predict future conditions. But one set of data alone wouldn’t do. Each day, as more sets of data are entered, the program would LEARN which way parameters were changing and how fast. Predictions should then improve. But the program would still be dumb, as it has not yet seen conditions that change steadily in one direction, cyclically, randomly, or in mixtures of all three trends. As the system learns the range and possibilities of all of this, with experience, it would become smarter and smarter, based completely on how long it has to learn and the number of major factors it has to consider in its calculations. As there are probably more than 75 major influences on climate and the Earth is constantly changing, even continental drift has to be taken into account.
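A minimal sketch of the kind of day-by-day “learning” predictor described above (my own hypothetical illustration, not any real climate code): each new reading nudges an estimated trend, and the forecast is simply the last value extrapolated along it.

class TrendLearner:
    # Hypothetical illustration: learn "which way and how fast" from daily data.
    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing      # how quickly the trend estimate adapts
        self.last_value = None
        self.trend = 0.0

    def update(self, value):
        if self.last_value is not None:
            step = value - self.last_value
            # blend the latest day-to-day change into the running trend estimate
            self.trend = (1 - self.smoothing) * self.trend + self.smoothing * step
        self.last_value = value

    def predict(self, steps_ahead):
        return self.last_value + self.trend * steps_ahead

learner = TrendLearner()
for reading in [14.1, 14.2, 14.15, 14.3, 14.25]:   # made-up daily values
    learner.update(reading)
print(learner.predict(steps_ahead=30))

As the comment notes, a scheme this simple is blind to cyclical or random behaviour; it only ever extrapolates the direction it has already seen.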
Well sure, this is done all the time and is the entire point of ‘Big Data.’ But it’s worth repeating: Correlation is not causation. And these are correlative devices. But even then, if there are multiple unknowns then a lack of correlation doesn’t demonstrate a lack of causation for anything but the entire set of unknowns. There are limited ways to deal with these issues, but there are no silver bullets.
There are models that are very useful for pilots, for example NAIRAS. Hardly anyone knows about them; you cannot see them in the media. Strong GCR (galactic cosmic rays) can easily damage your computer in the airplane.
There are also very good operational models of temperature and pressure in the stratosphere over the polar circle, useful for weather forecasting. Who knows about them?
Computers are magic, modelers are magicians, and scientists, in the post-normal era, are sorcerers. The uniform theory of science, that conflated the logical domains, was established through liberal assumptions of uniformity and continuity, reasoning through inference, and exaggerated significance of corollary evidence, outside of a limited frame of reference.
The debate in this thread is largely a dispute between people with various levels of expertise in various disciplines of IT and its application to modeling. That’s not the big picture issue.
The big picture is that the vast majority of people have no experience (except as an end user) in IT or in modeling. They can certainly judge the output of their computer generated bank statement, and because their experience is that only in the rarest of occasions is such output wrong (or pick from one of many other examples, this is hardly the only one in our daily lives) it becomes natural to assume that ALL computer generated outputs are similarly accurate. Thus, computer modelling becomes, for the public at large, the ultimate Appeal to Authority, such Authority demonstrably undeserved in the case of climate models.
“largely a dispute between people with various levels of expertise in various disciplines of IT and its application to modeling”
Quite correct. In my field the IT folks are responsible for setting up networks, servers, file mapping, storage and security. No one in IT uses the computers to perform modelling (well, they might use a spreadsheet to project how many new computers will be needed if we hire 100 people).
In engineering, the use of computers for modelling is only one more tool in the toolbox. It does not replace a comprehensive understanding of the underlying physical phenomena. And a computer model is never used without full verification in incremental steps along the way.
An embarrassing example of the frailty of computer models: about 10 years ago a large airplane manufacturer attempted to compute the length of all the cables needed inside a new “fly by wire” airplane design. They even ordered all the cables manufactured to exact lengths, with connectors on each end, before the first plane was built. Once they attempted to assemble the airplane, it turned out that most of the cables were just a “wee bit” too short. The folks designing the cables used a 2-D model of the plane while everybody else was using a 3-D model of the plane…
It took almost a year to figure out the correct length of all the cables and re-fabricate them. Nobody wants a “fly by wire” airplane with “splices” in the wiring, go figure.
If modern computer models of relatively simple stuff like wire lengths, supported by lots of IT professionals, can’t “project” the correct length of a bunch of wires, who the heck really believes they can tell us the weather in the year 2100???
Oh, that airplane would be the Airbus A380, the largest commercial passenger plane yet; the CEO (almost?) lost his job over the screw-up.
Cheers, KevinK.
If I recall correctly, the tailplane section of the A380 was “modelled” in a computer and built from composite materials. It needed reworking after actual flights because of some stress fractures that modelling did not expose.
I’m reminded of the “Right Climate Stuff” team, who described the CAGW campaign as nothing more than an unproven computer model combined with a lot of speculation. If you take away the computer model, then what is left?
I totally agree with you that computers have limitations and climate modelling may not be accurate. I think that it’s much more important to understand climate: what the mechanisms of climate change are, and what happened in the past and led to the present. Everybody is preoccupied with climate change, but it is useless to discuss only the future without understanding the main cause of the climate transformation. My opinion is that the ocean, and human activity on the ocean (mostly naval wars), makes a big contribution to the matter. Aren’t we ignoring that? Shouldn’t we pay more attention to the ocean?
Let’s be clear here. Models can be very valuable. Programmers can, and do, do their jobs right. But there are at least four major questions here:
– How closely does the model approximate reality?
– How good is the *implementation* of the model (a programming issue)?
– What is the quality and resolution of the real world input data into the model?
– When the implemented model is run, how closely does it actually approximate reality, especially when historical data is entered and compared to later known historical data?
Each of these areas is big and important in its own right. But all of them can be trumped by another question:
– Are those dealing with the model honest?
Dishonesty injected into a complex problem means that it’s guaranteed to no longer correspond to reality. Indeed, that is what dishonesty means: that one states one thing, while knowing that the reality is something else.
It is possible for a computer program to emit a pre-determined result, but that is not the way a well designed and well implemented model would function. That’s possible if *fraud* and *dishonesty* are involved, and there are many ways to be fraudulent and dishonest in the midst of such complexity.
I think somebody brought up the case of the classic “Game of Life”, a cellular automaton with very simple rules that can create breathtakingly complex dynamic structures from very simple starting input. Nobody could predict the final structure of an arbitrary random input in, say, 10 million generations by just looking at the input.
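For readers who haven’t seen it, here is a minimal sketch of the standard Game of Life rules (B3/S23); the grid size, density and generation count are arbitrary choices of mine. The rules fit in a dozen lines, yet in practice the only way to learn where a random start ends up is to run it.

import random
from collections import Counter

def step(live_cells):
    # One generation: a live cell survives with 2 or 3 live neighbours,
    # and an empty cell with exactly 3 live neighbours is born.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

cells = {(random.randrange(30), random.randrange(30)) for _ in range(200)}
for _ in range(1000):
    cells = step(cells)
print(len(cells), "live cells after 1000 generations")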
A “climate modeler” can at least be taken seriously if all four of those major points are *honestly* considered. Not accepted at face value, but at least taken seriously, if they are open to critiques as all real scientists are. But if dishonesty is involved, nothing about what they’re doing should be taken seriously.
At minimum, to be taken seriously, a modeler should publicly present detailed documentation on all of the above four points, including the source code to any custom programs used.
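One concrete way to put the fourth question above to any model is a hindcast score: run it over a period whose outcome is already known and measure the misfit. A minimal sketch, with made-up numbers standing in for real observations and real model output:

# Hypothetical numbers, for illustration only.
observed = [0.10, 0.12, 0.18, 0.15, 0.22, 0.25]   # what actually happened
modelled = [0.11, 0.16, 0.20, 0.24, 0.28, 0.33]   # what the model hindcast

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def mean_bias(a, b):
    return sum(y - x for x, y in zip(a, b)) / len(a)

print("RMSE:", round(rmse(observed, modelled), 3))            # overall misfit
print("Mean bias:", round(mean_bias(observed, modelled), 3))  # systematic drift

Publishing numbers like these, together with the documentation and source code asked for above, is what lets anyone else check the work.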
Thanks, Eric Worrall, I hope you are right.
If only we knew how the Earth’s oceans/weather system works!
Proof that we don’t is in the models themselves, in how they fail.
Programmers and computers actually do a good job, but in this case, Garbage-In -> Garbage-Out. And the job is very expensive.
I have spent 30+ years writing computer code; software does not do science and does not generate scientific data. In science, a computer program only expresses a hypothesis.
Many people who do not understand software confuse doing engineering using computers with doing science with computers; the latter is a fallacy.
P.S. Claiming that computer code does science via statistical methodology is particularly absurd.
Looking for some recent climate model code, I came across: http://www.cesm.ucar.edu/models/ccsm3.0/ccsm/doc/UsersGuide/UsersGuide/node9.html which has a ‘working’ model you can download and run.
I can see one problem straight away in the help section: ‘Software Trapping: Short CCSM runs are carried out regularly with hardware and software trapping turned on. Often, such trapping will detect code that produces floating-point exceptions, out-of-bounds array indexing and other run-time errors. Runs with trapping turned on are typically much slower than production runs, but they provide extra confidence in the robustness of the code. Trapping is turned on automatically in these test cases via setting $DEBUG to TRUE in env_run. Test case names begin with “DB”.’
So when a long simulation is performed, all error checking is disabled!!! I wonder how many errors are missed?
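For anyone unfamiliar with “trapping”, here is a small sketch of the trade-off using numpy as a stand-in (my own example, nothing to do with the CCSM code itself): with trapping off, a bad operation silently produces inf or NaN that can flow through millions of later calculations; with trapping on, the run stops at the first offending operation, at the cost of speed.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, 2.0])

np.seterr(divide="ignore", invalid="ignore")   # "production" mode: no trapping
print(a / b)                                   # [1. inf 1.5] -- and the run carries on

np.seterr(divide="raise", invalid="raise")     # "DEBUG" mode: trap the error
try:
    print(a / b)
except FloatingPointError as err:
    print("trapped:", err)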
As an indication of the openness and transparency within the climate research community, I note that a ‘current and state of the art’ simulator is available: http://www.cesm.ucar.edu/models/cmip6.html
The CMIP6 ‘product’ funded by Government/public money is only available to those ‘within the circle of trust’:
‘Access to developmental versions of CCSM and its component models can be requested by filling out an online form. (Access is restricted to only CCSM developers and close collaborators.)’
I wonder what they want to keep from prying eyes?
Allowing public read access to their ftp server is trivial.
This is entirely typical of the entire CACCA ho@x modus operandi.
It is not science. It is totalitarian group think propaganda and indoctrination. Any so-called scientist participating in this charade is a charlatan.
So you think there is a global conspiracy to promote this hoax. Do you also think the moon landing was a hoax? You know, you can’t trust those government-funded types, regardless which country or continent they come from. We are all in it together.
As an IT expert, it has always been interesting to witness the blind faith – the level of confidence – that the non-“IT hoi polloi” place in those sometimes completely unreliable programs, links and mathematical calculations (even small spreadsheets must contain some checks and balances). When one banks online, one assumes that the transaction will be carried out as requested. However, when it sometimes does not, we are never really that surprised, are we?
Someone has failed to do their work properly.
Such is the method used in climate models. One can predict the outcome before the outcome is demonstrated. How can any thinking person spend their entire life stating that it works with “95% accuracy”, when they are well aware that the outcome has already been roundly, soundly and clearly manipulated? That is not science or common sense. That borders on ignorance, politics, dishonesty and obvious stupidity. Well done Bill and Co.
I agree with the view that many people, but maybe especially AGW folks, have a magical view of the computer. It seems to be a view based on Star Trek rather than reality.
I agree with people who are saying that models have minimal software engineering in them. I also agree with people (Leif?) who point out that the n-body problem is similar in complexity. I pointed this out in the discussion on Chaos, and the general response was “WTF?”. I’m surprised that people underestimate the n-body problem and over-estimate the climate problem. People are quite willing to suspend disbelief for one, but not the other. We can’t simply say “computer, verify voice print, Picard alpha beta prime; what is the mass of the sun?”. They are both extremely difficult problems, but seem not to be beyond human understanding.
However, I’m seriously amazed that except for Notanist, no one has disagreed with the technological singularity ideas put forth. It’s pure irony that the author wrote it, and that no one else responded, because it exactly confirms the premise that people have a magical view of computers.
I’m reminded of the Life of Brian, where a messiah figure says “Blessed are the cheese makers”. A listener says “what’s so special about the cheese makers”. A haughty know-it-all turns around and says “It’s not meant to be taken literally, he’s referring to all manufacturers of dairy products”.
In short, we (humanity) have absolutely no idea what causes consciousness. Whatever it is, it’s the source of thinking. Therefore, we have no idea how to create such a thing. As far as we know, we’re limited to traveling below the speed of light AND we’re limited to computers that execute software. As impressive as they are at that task, they will never think.
VikingExplorer,
From a Post-Modern perspective, what is “think”?
If it works for the Terminators, how would you argue?
MCourtney, your question confirms my thesis. As someone who has been writing software for 37 years, I know one thing: in order to implement functionality in software, we need to understand the algorithm first. Since, as you say, “what is think?”, it confirms that we have no idea how to implement the thinking algorithm.
Actually, the Terminators were implemented fairly realistically (apart from the time travel and the liquid-metal part). They mindlessly executed their software program. The most far-fetched portrayal is the doctor on Star Trek Voyager.
“VikingExplorer
June 7, 2015 at 2:48 pm
I’m reminded of the Life of Brian, where a messiah figure says “Blessed are the cheese makers”. A listener says “what’s so special about the cheese makers”.”
No! A listener mishears what the messiah is saying, and thus ensues the comedy.
Right, we don’t actually hear what the messiah figure said. It doesn’t change the point.
Correct. It is misheard, that is what I said. And thus the comedy ensues.
I think the computer model issue with climate models is all related to economics. If the purpose of the climate models was to sell them to people who needed them to make accurate predictions, the modelers would correlate the models so they matched past, known, measured data, so that predictions of the future are more believable. Since there is little competition for selling the climate models to a market that needs correct predictions, there is no motivating driver to make the models match reality. What the climate models do is generate data that determines the need for budget levels related to studying and dealing with global warming. If the climate models predicted insignificant global warming, then they would essentially be making a case for decreased funding for climate studies and for eliminating the IPCC, because if there is no climate change expected, why would we spend money to run the IPCC? Workers want to drum up work and managers want to grow their empires; they aren’t going to adjust the unknowns in their computer model inputs so they produce data that says “You don’t need me, what you hired me to deal with is not a problem”.
Computers can only add, subtract, and rotate bitwise binary numbers. This is done at the lowest machine level, and anything done by high level programming languages compiles to combinations of these.
And push bits around – a very important function! And make simple comparisons and change the program flow based on that.
A lot of computers and most high level languages can’t rotate bits around in register, they just support various forms of shifting data. Good enough.
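To illustrate the point about shifts versus rotates, here is a small sketch (my own, in Python for readability): a 32-bit rotate composed from two shifts and an OR, which is what you end up writing in languages that only expose shift operators.

def rotate_left_32(x, n):
    # Rotate a 32-bit value left by n bits using only shifts, OR and a mask.
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

print(hex(rotate_left_32(0x80000001, 1)))   # 0x3 -- the top bit wraps around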
lsvalgaard said, “Computer models are not built to produce a desired answer, but to show us what the consequences would be of given input to a system of equations, either derived from physics or from empirical or assumed evidence.”
Wow! I’m not sure if he is saying that computer models CANNOT be built to produce a desired answer, or that computer programmers are made of greater stuff than ordinary humans and would never stoop to do such a thing. I can guarantee that if there is money in it, a programmer can be found to get you your desired answer. In the real world, there is often a big difference between what “should” be the case and what is the case.
But even if a programmer is trying to be as accurate as possible, and gets all the known physics right, any complex model will involve “assumed evidence” that is easily influenced by biases. Not only that but after initial runs, assumptions programmed into a model can be easily adjusted or tuned precisely because they are just assumptions. You can’t tell me that climate modelers do not tune their models when initial outputs are off from what was “expected.” We know climate models contain assumptions that go beyond known physics because otherwise, all the models would produce the same output. If these models were calculating a flight path to Pluto, which one would you trust? Any of them?
We also have good reason to believe that climate-model programmers are biased because they have been very slow to correct their models to match the real world after they have been shown to exaggerate warming. Instead, they claim their models may not be exact in the short term but will be proven accurate in the distant future – after they are long retired.
In the nuclear T/H modeling world, no one believed the model results produced by anyone else, but everyone believed the experimental results from the test facilities, except for the guy who did the experiments.
funny and scary at the same time.
lsvalgaard
June 7, 2015 at 1:52 pm
Ah, but Dr Svalgaard, how can we defend the claimed accuracy of 1/10 of one degree in world-wide temperature from the average of thousands of runs of different computer models, in a world 100 years (and hundreds of quintillions of approximate calculations from today’s input approximations) away, when their basic input has never been used to calculate the top-of-atmosphere radiation input – much less the radiation balance?
Today’s model bureaucrats – and the politicians who fund their religion and their biases and faith – have been running global circulation models since before Hansen’s Congressional fraud back in 1988 by opening windows to the DC heat and humidity the night before.
But, they have never re-run their models with the correct top-of-atmosphere solar TSI radiation levels.
As you have several times pointed out, the solar groups have had persistent problems calibrating the TSI (total solar irradiance) levels at minimum each solar cycle. Today’s textbooks, on-line class notes, today’s government websites and their bureaucratic papers, and all of the recent climate papers I’ve read since 1996 abound with different TSI values. But somehow, the average of every model run since 1988 has come back with the same IPCC approved 3.0 watts/m^2 per CO2 doubling.
Seems strange when the cyclic low of the sunspot cycles since 1987 has dropped from 1371.5 down to 2011’s 1361.0. And the old climate models have never been re-run with the changed TSI values. If dropping 10 watts/m^2 at TOA does “nothing” over 100 years, why should we believe any of the modeled predictions between 1988 and 2015?
http://spot.colorado.edu/~koppg/TSI/TSI.jpg
Jeez, the drop is because the older instruments were plagued by scattered light that let more light into the cavity and hence resulted in too high readings. This is all well-understood and is corrected for.
lsvalgaard.
No doubt. And, yes, the solar side of the “science” knows its business. The change – which we have discussed before – must be judged appropriate and correct.
Top of atmosphere TOA solar radiation MUST now be based on a yearly average of TSI = 1361.x at the low point of every solar cycle. Whether the climastrologists even deem it worthy of correcting for the ups and downs of the possible solar cycles between today’s low of cycle 24, 25, or 26 (much less guess for cycles 27, 28, 29, or 30) is beyond knowledge. And they are cleverly NOT saying.
Now.
Tell the climate astrologist modelers. They have NOT re-run their models using the accurate (actual) lower TSI that is actually correct. Thus, NO model prediction for 2100 is accurate anymore – at ANY level of CO2 assumptions, whether made now, in 1972, 1983, 1996, 1998, 2008 or 2015, or in 2050, 2075 or 2100.
So, from the above plot you gather that actual absolute TSI has changed by ~10w/m^2?
So, ERB@1371.5, ACRIM@1367, NOAA9@1365 and NOAA@1354, which all happened at the exact same time, suggest what? Calibration issues.
Same goes for any overlapping segments, the Sun didn’t just magically change to two different TSI values at the exact same time.
Also remember to multiply TSI by 0.7 and divide by a factor of four.
“Initially disregarded by the community as an error in the TIM instrument, this difference has recently been shown to be due to uncorrected scatter causing erroneously high measurements by other instruments, all of which have an optical design that differs from the TIM by allowing two to three times the amount of light intended for measurement into the instrument.”
“Offsets are due to instrument calibration differences.” (partial caption from Figure 1):
http://www.swsc-journal.org/articles/swsc/abs/2014/01/swsc130036/swsc130036.html
ICU
No. Dr Svalgaard’s statement is that every “older” measurement is inaccurate (calibration problems, as you appear to phrase it?) and that the actual TSI over the entire period is – and has been – constant at 1361 watts/m^2.
Dividing by 4 (to create a mythical “average whole earth radiation level”) and then assuming an average 0.70 atmosphere absorption factor only adds more inaccuracies to your assumptions.
But you have to do that to get from solar TSI to a metric covering the entire surface of the Earth; that factor reduces the ~1360 number by a factor of almost six (regardless, the AOGCMs do account for this properly).
But since you know climate science so much better than the purportedly ‘politicized’ climate scientists, suggest something better, other than sticking one’s head in the sand.
BTW, both of the above quotes in my previous post were from the same website as the image you posted above.
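For anyone wanting to check the arithmetic in this exchange, here is a back-of-envelope sketch (my own, using the round numbers quoted above): spreading the solar beam over the whole sphere divides by four, and keeping about 70% after reflection gives the ~238 W/m^2 figure and the “factor of almost six”.

tsi = 1361.0                       # W/m^2 at top of atmosphere (approximate)
absorbed = tsi * 0.7 / 4.0         # spread over the sphere, ~30% reflected
print(round(absorbed, 1))          # 238.2 W/m^2 averaged over the Earth
print(round(tsi / absorbed, 2))    # ~5.71 -- the "factor of almost six"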
I’ve waded through the comments and feel obliged to weigh in. I have more than forty years in the business of building both hardware and software, an undergraduate degree in physics, and an MS in Operations Research (specializing in computer science).
The reason we “model” is to simplify complex problems so that we can “tweak” inputs and gain an understanding of how whatever we’re modeling would behave in the real world. If a model isn’t predictive, it’s wrong.
One very effective form of modeling is system simulators (think of flight simulators or refinery simulators). These work very well – until they don’t. And they’re enormously valuable when they do work.
The problem with climate models is that they simply don’t work. They have no predictive value. The ones that get funded presume catastrophic results based on small increases in ambient CO2 and to date, this hasn’t panned out.
In 1984 I was asked to write a program in Fortran IV coding a polygonal approximation to a discrete magnetized body of N sides. My boss gave me the paper and asked “can you do it?”. I answered “yes, but this is a tough problem.” His reply was “coding is not the tough part. Try convincing the Earth that it is a polygon”. My take away was that we are always forcing the Earth (through modeling) to be something it is not.
Many commenters keep repeating that ‘computers only do what they are told to do and don’t [and can’t] invent things on their own’. THAT is precisely why computer models are trustworthy. They can be relied upon to follow instructions and not screw up on their own. Of course, the instructions have to be correct, but that is a people problem, not a computer problem.
“Many commenters keep repeating that ‘computers only do what they are told to do and don’t [and can’t] invent things on their own. THAT is precisely why computer models are trustworthy. They can be relied upon to follow instructions and not screw up on their own. Of course, the instructions have to be correct, but that is a people problem, not a computer problem”.
No one here is saying that computers do not follow instructions or that they screw up on their own. NO ONE. What everyone is saying is that PEOPLE ARE THE PROBLEM. People who program computers based on the “people-ish” theories, assumptions, information given to them by other people! It’s the POINT that Eric tried to make in the article that prompted this thread!
“First and foremost, computer models are deeply influenced by the assumptions of the software developer. ”
If PEOPLE do not know how “Earth climate” works exactly, then they cannot produce “instructions that are correct”. And every aspect of the programming that isn’t specific, or that has free variables, then interacts with other aspects of the programming, and the errors expand exponentially. The climate system is a chaotic system, not a predictable machine with a small range of possible outcomes. How do you limit the range of expectations when you don’t even know the ranges of all of the components… or even IF you know ALL of the components in the first place?
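The standard classroom illustration of that sensitivity is the logistic map (a toy equation, not a climate model): two starting values that agree to six decimal places soon bear no resemblance to each other. A minimal sketch:

def logistic(x, r=3.9):
    # Chaotic for r near 3.9: tiny input differences are amplified each step.
    return r * x * (1 - x)

a, b = 0.400000, 0.400001          # differ only in the sixth decimal place
for _ in range(60):
    a, b = logistic(a), logistic(b)
print(a, b, abs(a - b))            # after 60 steps the trajectories have diverged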
No, you have not understood anything. Constructing a model is a scientific enterprise and as such is subject to the scientific method, that is: the physics used and the assumptions made must be justified in communication with other scientists and with experiments. They are not the ‘personal opinion’ of the constructor, and they do not just ‘influence’ the construction, they determine the model. There is nothing left of the person in the model, it is pure and hard science. It may not be correct in the end, but should be the best we can do as a collective at this time. Models can differ in details [and that is a good thing] but not in large-scale structure, as the physical laws are universal. The peer-review system will in the end accept, reject, or modify the model(s). Science is self-correcting, bad stuff is eventually snuffed out.
Surely the only thing wrong with lsvalgaard’s statement below is the use of the positive – as in “constructing a model IS a scientific enterprise” – instead of the normative, as in “constructing a model ought to be a scientific enterprise”. It seems to us that the force of lsvalgaard’s defence – and this is one of the longest series of straight bats ever – depends on this one thing. We are sure that we all agree that, normatively, what lsvalgaard says is absolutely right – indeed, it is unlikely to be anything else coming from such an experienced and successful scientist.
But debate becomes less exciting if the distinction between positive and normative is carefully observed – and the entertaining exchanges about the original post here would have been denied to us.
lsvalgaard
June 7, 2015 at 7:50 pm
“…Constructing a model is a scientific enterprise and as such is subject to the scientific method, that is: the physics used and the assumptions made must be justified in communication with other scientists and with experiments.”
Most sciences that use models do not use the results to make major political and economic policy. Unfortunately, in climate modeling this is all too true. With so much money and so many reputations on the line, the scientific method has been compromised.
I think Leif’s statement is correct as written, without the addition of “ought”. According to the best definition of science:
My bolding. Even though I am totally anti-AGW, I cringe when people on my side overstate the case. Examples are claiming that climatology is inherently impossible or that climate modeling is impossible, or demonizing existing climate modelers.
I would speculate that most existing climate models are using the wrong approach, but it seems obvious to me that they are in the “pursuit” of knowledge, using a systematic methodology.
“THAT is precisely why computer models are trustworthy.”
With respect; “THAT is precisely why computer models are TOTALLY UNtrustworthy.”
If you do not verify the predictions/projections/guesses that your computer model produces you are only fooling yourself, and everybody else will (eventually) figure it out.
Cheers, KevinK.
“The peer-review system will in the end accept, reject, or modify the model(s). Science is self-correcting, bad stuff is eventually snuffed out.”
With all due respect, Mother Earth has reviewed all of your “peer reviewed” science and found it extremely lacking….
None of the future temperature predictions of Arrhenius, Callendar, Hansen, Trenberth or Mann have even remotely come to fruition. After more than a century it is far past time for real science to “snuff out the bad stuff”.
Cheers, KevinK.
You can do that if you can replace it with something better. Show me the better replacement and we can talk some more.
“You can do that if you can replace it with something better. Show me the better replacement and we can talk some more.”
Ok, here is something better: the climate (aka the average temperature) of the Earth is incredibly complex and is predominantly determined by the total radiation input from the Sun (UV, visible, IR, electromagnetic, cosmic rays, etc.) AND the thermal capacity of the oceans. That’s IT, it is really that simple, and all the folks that think they have a “handle” on this incredibly complex system of interactions (yes INTERACTIONS, NOT FEEDBACKS) are fooling themselves.
Here is my replace it with something better; “WE JUST DON’T KNOW”. Anybody that claims otherwise is only fooling themselves (and some other folks) for a while.
Cheers, KevinK.
And as the Sun has not had any long-term variation in the last 300 years, the climate shouldn’t either. As simple as that, apparently. And the ‘we don’t know’ thing won’t work either, as you then allow for CAGW [we just don’t know it].
Seems like despite all the talk, the climate has been similarly devoid of large variation in the last 300 years.
Things on climatic modelling collected by a non-IT-specialist and non-GCM-specialist (but with some simulation experience)
In 1988 I was at a plant physiology seminar at a university in the USA. In the discussion, the presenter mentioned that he had seen the code for the “latest and greatest” world climate model. And he was horrified. Plant evapotranspiration was included, but it was being handled by a model of a single plant stoma, with the results extrapolated to the world!
In the early 2000s I was at another seminar with people from the current version of the same scene in attendance. I mentioned the above, with the hope that bigger and better computing and more knowledge had improved the scene. And was told that, if anything, it was worse.
So my take is that the purveyors of such simulations ought to not only say what is being included but demonstrate how, and how well, it is being included in their efforts. And “nullius in verba”.
Once, a person was called a “computer”. I think that was in the 17th or 18th century and for accounting requirements. My memory fades.
If there are any Aussies here: computers executed actions, and a major incident at a major bank, starting July 26th 2012, resulted. That incident was initiated by a human (a workmate of mine, believe it or not). And another human, me, had to fix it (thanks Daniel – your actions destroyed my life).
[Mid-1940’s – The women “calculators” running the artillery calculations for the war department were “computers” …mod]
“Once, a person was called a “computer”.”
During the early stages of WWII, women were employed by the US Army as “computers” to calculate “trajectory tables”. These were tables that “projected” where an artillery shell would land after being “shot” at a particular angle from a gun of a certain size (i.e., muzzle velocity). Relatively simple mathematics, but lots of labor to fill out a complete table with information for every 1 or 2 degrees of gun/mortar elevation.
One of the first uses of “digital computers” (i.e. the original UniVac ™ digital computer) was to replace those slow “human computers” with electronics.
Cheers, KevinK
That’s kewl info. Thanks!
Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model
This statement inadvertently shows the problem. First, it shows the biases of the writer… I mean… what makes anybody think that an artificially intelligent machine will have the same priorities as humans? A machine may consider finding a continuous supply of energy, its lifeblood, more important than breathable air or clean water.
The designer of a model, who thinks CO2 levels are important, may not put the same weighting on other gases.
No, I haven’t read all the comments here, but I wrote this below about 6 or 7 years ago on WUWT.
Remains true today. We only know if things are true or not when we compare results to the real world.
“I was reading above where you guys are proving 2+2=5 etc., now consider this, and Google “Why the Pentium can’t do maths”
The presence of the bug can be checked manually by performing the following calculation in any application that uses native floating point numbers, including the Windows Calculator or Microsoft Excel.
The correct value is (4195835 / 3145727) = 1.333820449136241002
However, the value returned by the flawed Pentium would be incorrect beyond four significant digits
(4195835 / 3145727) = 1.333739068902037589
Another test can show the error more intuitively. A number multiplied and then divided by the same number should result in the original number, as follows:
(3145727 x 4195835) / 3145727 = 4195835
But a flawed Pentium will return:
(3145727 x 4195835) / 3145727 = 4195579
Also consider this, the bug above is known and was discovered by accident, how many bugs in the floating point processor are still un-discovered ?
And again, consider this: supercomputers are made up of “off the shelf” Intel processors. Older computers used P4s. The Tianhe-1A system at the National University of Defence Technology has more than 21,000 processors. (Google “supercomputer”.)
So after millions and millions of calculations done in a climate model with the error getting greater, done by faulty math processors with an unknown “bug”, who is going to guarantee the end result ?
We have become too reliant on the ability of these machines to be accurate. Blind faith?"
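For what it’s worth, the check described in that old comment looks like this in any modern language (a sketch in Python; on today’s CPUs it prints the correct values, whereas a flawed Pentium’s hardware divide would have returned roughly 1.333739…):

x, y = 4195835, 3145727

print(x / y)          # correct value: 1.333820449136241...
print((y * x) / y)    # should give back exactly 4195835.0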
Your car has an embedded Pentium processor.
probably why it gives me problems.
Next car will be made in Japan !
Just tried it. The bug has been fixed. I’m truly offended by the logic:
premise: bugs exist ==> mistakes happen
conclusion: models are unreliable ==> science is impossible
This kind of fallacious logic is far more dangerous than an undiscovered bug.