An IT expert's view on climate modelling

Guest essay by Eric Worrall

One point struck me, reading Anthony’s fascinating account of his meeting with Bill McKibben. Bill, whose primary expertise is writing, appears to have an almost magical view of what computers can do.

Computers are amazing, remarkable, incredibly useful, but they are not magic. As an IT expert with over 25 years’ commercial experience, someone who has spent a significant part of almost every day of my life since my mid-teens working on computer software, I’m going to share some of my insights into this most remarkable device – and explain why my experience of computers makes me skeptical of claims about the accuracy and efficacy of climate modelling.

First and foremost, computer models are deeply influenced by the assumptions of the software developer. Creating software is an artistic experience: it feels like embedding a piece of yourself into a machine. Your thoughts, your ideas, amplified by the power of a machine built to serve your needs – it’s an eerie sensation, feeling your intellectual reach unfold and expand with the help of a machine.

But this act of creation is also a restriction – it is very difficult to create software which produces a completely unexpected result. More than anything, software is a mirror of the creator’s opinions. It might help you to fill in a few details, but unless you deliberately and very skilfully set out to create a machine which can genuinely innovate, computers rarely produce surprises. They do what you tell them to do.

So when I see scientists or politicians claiming that their argument is valid because of the output of a computer model they created, it makes me cringe. To my expert ears, all they are saying is that they embedded their opinion in a machine and it produced the answer they wanted it to produce. They might as well say they wrote their opinion into an MS Word document and printed it – here is the proof, see, it’s printed on a piece of paper…
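
To make the point concrete, here is a minimal sketch (Python, with an entirely made-up sensitivity number) of how a model’s “answer” can be nothing more than the developer’s own assumption handed back:

```python
# Toy illustration only: the "projection" is nothing but the developer's
# assumption handed back. The sensitivity figure is made up.

ASSUMED_SENSITIVITY = 3.0   # deg C per doubling of CO2 -- the modeller's opinion

def project_warming(co2_doublings):
    """Return 'projected' warming, which is just the assumption multiplied out."""
    return ASSUMED_SENSITIVITY * co2_doublings

print(project_warming(1.0))  # 3.0 -- "the model confirms 3 C per doubling"
```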

My second thought is that it is very easy to be captured by the illusion that a reflection of yourself means something more than it does.

If people don’t understand the limitations of computers, if they don’t understand that what they are really seeing is a reflection of themselves, they can develop an inflated sense of the value the computer is adding to their efforts. I have seen this happen more than once in a corporate setting. The computer almost never disagrees with the researchers who create the software, or who commission someone else to write the software to the researchers’ specifications. If you always receive positive reinforcement for your views, it’s like being flattered – it’s very, very tempting to mistake flattery for genuine support. This is, in part, what I think has happened to climate researchers who rely on computers. The computers almost always tell them they are right – because they told the computers what to say. But it’s easy to forget that all that positive reinforcement is just a reflection of their own opinions.

Bill McKibben is receiving assurances from people who are utterly confident that their theories are correct – but if my theory as to what has gone wrong is correct, the people delivering the assurances have been deceived by the ultimate echo chamber. Their computer simulations hardly ever deviate from their preconceived conclusions – because the output of their simulations is simply a reflection of their preconceived opinions.

One day, maybe one day soon, computers will supersede the boundaries we impose. Researchers like Kenneth Stanley, like Alex Wissner-Gross, are investing their significant intellectual efforts into finding ways to defeat the limitations software developers impose on their creations.

They will succeed. Even after 50 years, computer hardware capabilities are growing exponentially, doubling every 18 months, unlocking a geometric rise in computational power, power to conduct ever more ambitious attempts to create genuine artificial intelligence. The technological singularity – a prediction that computers will soon exceed human intelligence, and transform society in ways which are utterly beyond our current ability to comprehend – may be only a few decades away. In the coming years we shall be dazzled by a series of ever more impressive technological marvels. Problems which seem insurmountable today – extending human longevity, creating robots which can perform ordinary household tasks, curing currently incurable diseases, maybe even creating a reliable climate model – will in the next few decades start to fall like skittles before the increasingly awesome computational power and software development skills at our disposal.

But that day, that age of marvels, the age in which computers stop being mere machines and become our friends and partners, maybe even part of us through neural implants – perfect memory, instant command of any foreign language, immediate recall of the name of anyone you talk to – that day has not yet dawned. For now, computers are just machines: they do what we tell them to do, nothing more. This is why I am deeply skeptical about claims that computer models created by people who already think they know the answer, who have strong preconceptions about the outcome they want to see, can accurately model the climate.

MattS
June 7, 2015 6:48 am

I have close to 20 years of experience working as a computer programmer. In general, I think Eric Worrall has it dead on.
However, there is one small piece I have an issue with.
“it is very difficult to create software which produces a completely unexpected result.”
No, for most people it’s not that difficult. A computer is an idiot savant. As Eric says, it will do whatever you tell it to. However, he leaves off that it will take whatever you tell it to do very literally.
Thinking literally enough to write good software is a skill that a programmer must learn and cultivate. Most people can’t do it. They tell the computer to do X and then are shocked when it does X, because they really meant Y.

June 7, 2015 6:51 am

Computer modelling is inherently of no value for predicting future global temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4.
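
As a purely illustrative sketch of the initial-conditions problem (Python, using the classic Lorenz ’63 toy system rather than any real climate code):

```python
# Sketch: two runs of the Lorenz '63 system, differing by one part in a million
# in the starting state, end up nowhere near each other after enough steps.
def step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

run_a = (1.0, 1.0, 1.0)
run_b = (1.000001, 1.0, 1.0)   # tiny error in the "observed" initial condition
for _ in range(20000):
    run_a = step(*run_a)
    run_b = step(*run_b)
print(run_a[0], run_b[0])      # typically completely different values
```
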
Models are often tuned by running them backwards against several decades of observation. This is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially the key millennial range. Tuning to this longer millennial periodicity is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
This is perhaps the greatest problem with the IPCC model approach. These forecasts are exactly like taking the temperature trend from January – July and then projecting it forward linearly for 20 years or so.
In addition to the general problems of modeling complex systems as above, the particular IPCC models have glaringly obvious structural deficiencies, as seen in Fig. 1 (Fig. 2-20 from AR4 WG1; this is not very different from Fig. 8-17 in the AR5 WG1 report).
The only natural forcing in both of the IPCC figures is TSI, and everything else is classed as anthropogenic. The deficiency of this model structure is immediately obvious. Under natural forcings should come such things as, for example, Milankovitch orbital cycles, lunar-related tidal effects on ocean currents, the Earth’s geomagnetic field strength and, most importantly on millennial and centennial time scales, all the solar activity data time series – e.g., solar magnetic field strength, TSI, SSNs, GCRs (effect on aerosols, clouds and albedo), CHs, MCEs, EUV variations, and associated ozone variations and Forbush events.
The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions: first, that CO2 is the main climate driver; second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect; and third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot always be positive, otherwise we wouldn’t be here to talk about it. For example, an important negative feedback related to tropical cyclones has recently been investigated by Trenberth; see Fig 2 at
http://www.cpc.ncep.noaa.gov/products/outreach/proceedings/cdw31_proceedings/S6_05_Kevin_Trenberth_NCAR.ppt
Temperature drives CO2 and water vapor concentrations and evaporative and convective cooling independently. The whole CAGW – GHG scare is based on the obvious fallacy of putting the effect before the cause. Unless the range and causes of natural variation, as seen in the natural temperature quasi-periodicities, are known within reasonably narrow limits it is simply not possible to even begin to estimate the effect of anthropogenic CO2 on climate. In fact, the IPCC recognizes this point.
The key factor in making CO2 emission control policy, and the basis for the WG2 and WG3 sections of AR5, is the climate sensitivity to CO2. By AR5 WG1 the IPCC itself is saying (Section 9.7.3.3):
“The assessed literature suggests that the range of climate sensitivities and transient responses covered by CMIP3/5 cannot be narrowed significantly by constraining the models with observations of the mean climate and variability, consistent with the difficulty of constraining the cloud feedbacks from observations ”
In plain English, this means that the IPCC contributors have no idea what the climate sensitivity is. Therefore, there is no credible basis for the WG 2 and 3 reports, and the Government policy makers have no empirical scientific basis for the entire UNFCCC process and their economically destructive climate and energy policies.
A new forecasting paradigm should be adopted. For forecasts of the coming cooling, based on the natural 60-year and, more importantly, the quasi-millennial cycle so obvious in the temperature records, and using the neutron count and the 10Be record as the most useful proxy for solar “activity”, check
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

Reply to  Dr Norman Page
June 7, 2015 7:13 am

Cyclomania is worse than computer models…

Reply to  lsvalgaard
June 7, 2015 8:29 am

Using the obvious periodicities in the data is simple common sense. Look at the ephemerides, for example. Why you choose to ignore periodicities shorter than the Milankovitch cycles is simply beyond my comprehension – a case of cyclophobia? For the quasi-millennial cycle see Figs 5-9 at the last link in my comment.
We are certainly past the peak of the current interglacial. Figs 5 and 9 together certainly suggest that we are just approaching, just at, or just past a millennial peak in solar activity. See the decline in solar activity since 1991 (Cycle 22) in Figs 14 and 13.
I agree we don’t know for sure that we are just passing a 1000-year peak, but it is a very reasonable conjecture or working hypothesis. For how long do you think the current trend toward lower solar activity, i.e. lower peaks in the solar cycles, will continue, and on what basis would you decide that question?
I suggest that the decline in the 10Be flux record from the Maunder Minimum to the present, seen in Fig 11, shows clearly that this record is the most useful proxy for tying solar activity to climate.
Do you have any better suggestion?

Reply to  Dr Norman Page
June 7, 2015 8:41 am

Yes. That there has not been any ‘decline’ over the past 300 years. Here is the 14C record as determined by Raimond Muscheler:
http://www.leif.org/Comparison-GSN-14C-Modulation.png

Reply to  lsvalgaard
June 7, 2015 8:49 am

Cycles are present everywhere

Reply to  lsvalgaard
June 7, 2015 11:28 am

Leif, my comment referred to the 10Be flux in Fig 11 of the post. However, the red line on your graph indicates the same increase in solar magnetic field strength from 1690 to 1990 as the 10Be data of Fig 11. The decline since that 1990 peak, I reasonably suggest, possibly marks it as a peak in the millennial cycle.
For how long and how far do you think this decline might continue? If it does, do you think this will be reflected in a cooling climate?

Reply to  Dr Norman Page
June 7, 2015 11:31 am

There is no change in solar activity since 1700, so no decline to continue and no recent peak.

Reply to  lsvalgaard
June 7, 2015 1:17 pm

Leif, at 1700 the red line is reading about 80 MeV; at 1990 it reads about 800. The GSN at about 1695 is about 25, at 1990 about 80+.
Look at the trends in both curves since 1990. Your view that there was no increase in solar activity from 1690 to 1990 and no decline since then is simply incomprehensible based on the graph you posted.

Reply to  Dr Norman Page
June 7, 2015 1:26 pm

Well, some people have a comprehension deficit. Solar activity now is on par with what it was in 1700, 1800, and 1900. No trend at all. Is that so hard to comprehend? If you think so, what more is there to say?

Reply to  Dr Norman Page
June 7, 2015 1:33 pm

Here is our best estimate of solar activity the past 406 years:
http://www.leif.org/research/Fig-35-Estimate-of-Group-Number.png

ferdberple
Reply to  lsvalgaard
June 7, 2015 7:44 pm

Cyclomania is worse than computer models…
===============
very very few observed physical processes are not cyclical. there is a very simple reason for this. any process that is not cyclical will be finite in duration, and our chance of being alive at that same time to observe it is vanishingly small.
very few things in life are so simple they can be predicted successfully from first principles. thus the IPCC calls climate models projections. we can’t even predict the ocean tides from first principles.
if one truly wants to predict the future for complex systems, one has no alternative but to first discover the underlying cycles in the observed process. since cycles repeat, this provides reliable prediction for the future. this is how we solved the ocean tides.
the term “Cyclomania” is a logical fallacy. You are trying to discredit a technique that has worked repeatedly throughout history by the use of name calling.
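
As a sketch of the tide-style approach described above (Python; the period and the data series below are invented purely for illustration): assume the cycles, fit amplitudes and phases by least squares, then extrapolate.

```python
# Fit an assumed 60-"year" cycle plus a linear trend to a toy record, then
# project it forward, tidal-prediction style.
import numpy as np

t = np.arange(0.0, 200.0)                                        # 200 "years" of record
record = 0.4 * np.sin(2 * np.pi * (t - 12.0) / 60.0) + 0.003 * t  # invented data

period = 60.0
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coeffs, *_ = np.linalg.lstsq(A, record, rcond=None)               # harmonic fit

t_future = 260.0                                                  # 60 "years" ahead
prediction = np.dot([1.0, t_future,
                     np.sin(2 * np.pi * t_future / period),
                     np.cos(2 * np.pi * t_future / period)], coeffs)
truth = 0.4 * np.sin(2 * np.pi * (t_future - 12.0) / 60.0) + 0.003 * t_future
print(prediction, truth)
```

Because the assumed cycle genuinely repeats in this toy data, the extrapolated value lands on the right answer.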

Reply to  ferdberple
June 7, 2015 7:55 pm

If a ‘cycle’ repeats millions of times, it becomes a law. If it only shows up a few times, but is believed to be universally valid, it is speculation and not science unless a physical explanation can be given for why the cycle should be valid.

VikingExplorer
Reply to  Dr Norman Page
June 8, 2015 4:59 pm

>> Computer modelling is inherently of no value for predicting future global temperature with any calculable certainty because of the difficulty of specifying the initial conditions
You’re assuming that climatology is an extension of meteorology.
I just looked up the climate of NYC and found this: The daily mean temperature in January, the area’s coldest month, is 32.6 °F (0.3 °C)
This is really not dependent on any atmospheric / weather related initial conditions. The climate is properly defined as a 10 year global temperature average (preferably of the oceans), which is the result of forces external to the land/sea/air system. This would change the nature of the problem significantly.

June 7, 2015 6:54 am

Great expectations –
If one inputs the correct, proper parameters and applies the correct, proper physics/math, then one will always get what they expected: “The Correct Answer”.
Expecting anything other than the correct answer is the fault of the person.
In the case of the Climate Models, they admit they aren’t including all of the proper parameters and they admit that they can’t apply all of the proper physics/math. However, since one parameter they include is “increase CO2 and temperature increases”, the result is what they expect: “An Incorrect Answer That Confirms Their Belief”.
In Leif’s discussions above, I believe Leif is often describing scientists who are really trying to get “The Correct Answer”. Unfortunately, many of today’s “climate scientists” are not the folks Leif describes.

June 7, 2015 6:56 am

I still remember what Professor Portman said in my first synoptic meteorology lab at the University of Michigan in the late 1970’s. He stated something like, “It would take a meteorologist much of their life, working 24 hours a day to solve all the equations from one run of our weather models that a computer does in around an hour!”

KaiserDerden
Reply to  Mike Maguire
June 7, 2015 7:20 am

actually it can be done in seconds with a magic eight ball … and it will be just as accurate …

June 7, 2015 7:06 am

I tend to agree and disagree with the author. I have some experience running very complex models which use grids, time steps, and all sorts of equations to represent what happens in each cell, how it “communicates” with adjacent cells, and so on. When these models are developed there is no intent to arrive at a specific result, simply because they are generic. So I don’t agree that the model developer has a given answer in mind.
The model user takes the generic model product, and introduces values. These can be the grid cell numbers, their description, the way physical and chemical phenomena work, descriptions of fluid behavior, and the initial state (the model has to start at some point in time with some sort of initial conditions). It’s common practice to perform a history match “to get the model to work”. The history match is supposed to use prior history. This means that indeed the model is supposed to meet what’s expected: it’s supposed to match what happened before. But things get weird when the model is run forward into the future. This is where I agree with the author.
If the model predicts something we think isn’t real, we throw those results away. So there is indeed a lot of judgement involved. And if one is expected to deliver a “reasonable” result, then the runs we keep lie within a boundary. This is why we try to test the model’s performance using analogues. We find a test case with “an end of history”, where the process went very far, and run the model to see how it does. Climate modelers lack analogues. The planet has been around for a long time, but we didn’t measure it very well. That’s it.
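
As a purely illustrative sketch of the grid-and-timestep machinery described above (Python; one dimension, a made-up exchange coefficient, nothing like a real production model):

```python
# Bare-bones grid model: each cell exchanges with its two neighbours every step.
# Real models do this in 3-D with far more physics; the numbers here are arbitrary.
import numpy as np

cells = np.zeros(50)
cells[25] = 100.0            # initial condition: all the "heat" in one cell
k = 0.2                      # made-up exchange coefficient

for step in range(1000):     # time stepping
    neighbours = np.roll(cells, 1) + np.roll(cells, -1)
    cells = cells + k * (neighbours - 2 * cells)   # each cell updated from adjacent cells only

print(cells.round(2))        # the spike has diffused across the grid
```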

June 7, 2015 7:30 am

Al Gore’s global warming propagandist was asked why we should believe climate models. His response was that they have a total of TEN inputs, so they must be right. Gipo

daved46
June 7, 2015 7:34 am

I’m surprised that, AFAIK, nobody here has mentioned Kurt Gödel and his disproof of the completeness of mathematical systems. Back at the beginning of the previous century, Russell and Whitehead published a book based on the assumption that, using a few simple equations/assumptions, all of mathematics could be derived. Gödel published a paper which showed that any fixed system contained true statements which were not derivable from within the system. The title of the paper had a “I” at the end to indicate that other papers would follow, but he never had to publish them, since the first one convinced everyone that the Russell and Whitehead assumption was wrong. (See Hofstadter’s “Gödel, Escher, Bach” for much more.)
Dave Dardinger

Mr and Mrs David Hume
Reply to  daved46
June 7, 2015 10:15 am

Indeed – we believe that Dave Dardinger has made a very helpful point. We write as mathematical ignoramuses, but we had understood that Turing had shown (the “Halting Problem”) that not all numbers are computable and that it is not possible to programme a computer for everything. This appears to us to be a consequence of the truth of Gödel’s theorem; but it is a profound and disturbing result – and stands, I believe, unchallenged 80 years later.
So many comments on this list appear not to acknowledge this limitation of mathematics (and therefore of computers) that we wonder if we are wrong and some mathematical advance has been made of which Dave Dardinger and we are ignorant.
I cannot speak for Dave Dardinger but my spouse and I would be grateful to be put right.

ferdberple
Reply to  Mr and Mrs David Hume
June 7, 2015 7:57 pm

you are correct. consider the number 1/3. write this out as a decimal number and you get 0.33333…. at some point you have to cut off the 3’s, which leads to errors in your results. the more calculations you do, the more errors accumulate.
computers are base 2, so they have problems with different fractions than humans do with base 10. however, the problems are the same. so, we make computers with longer and longer word lengths – 8 bit, 16 bit, 32 bit, 64 bit, 80 bit floating point, etc. – to try and minimize this truncation problem.
so, this makes the errors accumulate a little more slowly, but it doesn’t get rid of them. and why is this a problem? because of chaos and poorly behaved non-linear equations. basically these are problems where the range of possible answers diverges, and these very small errors quickly swamp the result. you get results like avg 10 +- 100.
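
A quick Python illustration of the truncation point (exact printed digits may vary slightly by platform):

```python
# 0.1 has no exact base-2 representation, so repeated addition drifts
# away from the exact answer.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)               # roughly 999999.9998, not 1000000.0
print(0.1 + 0.2 == 0.3)    # False
```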

June 7, 2015 7:35 am

“a prediction that computers will soon exceed human intelligence,”
That will be a scary day when that happens, and those more intelligent, logical, emotionless beings we created realize that humans are irrational, illogical, emotional creatures who are getting in the way of their advancement.

Curious George
Reply to  J. Richard Wakefield
June 7, 2015 8:04 am

It has already happened for many humans.

TRBixler
June 7, 2015 7:49 am

A minor comment from a programmer with over 50 years’ experience in real-time software. I design and write software that works on principles. If they are correct, the software works; if they are not, the software fails. But the structure of that software is very important, for as the failures occur – and they will – the software must be allowed to grow to resolve the failures. Anyone who thinks they understand a software system well enough to believe it is perfect must be working on a very small problem. The Earth’s climate is real time, and it is not small by any measure. As I write this, the Adobe Flash plugin has crashed once again, and I am typing on a single computer totally dedicated to serving me. A simple task, and yet failure. The climate models have no real-time observers to see when they have failed and no prescribed method to resolve the failures. The physics is daunting and the programming is daunting. How can one believe in any of these models when the static reporting of global temperatures is such a mess (“Harry read me”)?

June 7, 2015 7:50 am

Wonderfully put!

Johan
June 7, 2015 7:50 am

What climate models have open source code?
How many climate scientists are also really great programmers?
How many models are able to produce accurate historical values for say the previous 50 years?
And one thing: it’s VERY easy to create a program that gives unexpected results – it’s the norm, to be honest. In some 35 years of programming I don’t think I have ever, to date, created a bug-free first try unless it was something very trivial.

June 7, 2015 8:02 am

“…Computer models
Cannot possibly predict,
The physics not sorted
To allow that edict;
But that is ignored,
You could say it’s denied,
(Isn’t that the term used
If you haven’t complied?)…”
More from: http://rhymeafterrhyme.net/climate-the-one-eyed-politician-is-king/

StefanL
June 7, 2015 8:23 am

Let’s be careful not to over-generalise here.
There are many different types of computer applications in the world – ranging from pure calculations to hard real-time interactive control systems. Each type has its own style of specifications, sources of inherent errors, design techniques, technological limitations, validation methods, etc.
The theoretical and practical issues involved in one type are never quite the same as in another.
However I do agree that models of chaotic, non-linear, coupled systems are the most likely to be problematic 🙂

Bruce Cobb
June 7, 2015 8:32 am

Computers and Warmists have a lot in common: they are both tools.

Editor
June 7, 2015 8:47 am

Even after 50 years, computer hardware capabilities are growing exponentially, doubling every 18 months, unlocking a geometric rise in computational power,

This is Moore’s Law. One reason it has held true in some arenas is that it became a convenient design goal. Moore’s Law originally referred to the capacity and reliability gains as advances in integrated circuits made for ever smaller transistors and connectors. It spread to other aspects of computing, in some cases drove design, and did quite well there.
In the last decade or two we’ve been running into some fundamental design limits that require either brute-force techniques or entirely different technologies.
Hard disk drive technology had two growth spurts, the second one being driven by the discovery and application of the giant magnetoresistance effect. Recently density growth rates have been nearly flat. http://en.wikipedia.org/wiki/Hard_disk_drive notes

HDD areal density’s long term exponential growth has been similar to a 41% per year Moore’s law rate; the rate was 60–100% per year beginning in the early 1990s and continuing until about 2005, an increase which Gordon Moore (1997) called “flabbergasting” and he speculated that HDDs had “moved at least as fast as the semiconductor complexity.” However, the rate decreased dramatically around 2006 and, during 2011–2014, growth was in the annual range of 5–10%

Graphically (CAGR is Compound Annual Growth Rate):
http://upload.wikimedia.org/wikipedia/commons/b/b9/Full_History_Disk_Areal_Density_Trend.png
Intel architecture processor speeds have been pretty much flat for several years. Performance increases focus on multicore chips and massive parallelism.
http://i.imgur.com/NFysh.png
All told, the biggest supercomputers in the world are doing pretty well and have maintained a steady growth rate, though there are signs of a slowdown in the last few years. The #1 system has over 3 million compute cores! http://top500.org/statistics/perfdevel/
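
For reference, the doubling times implied by those growth rates – simple arithmetic, sketched in Python:

```python
# Doubling time implied by a compound annual growth rate (CAGR).
import math

for cagr in (0.41, 0.60, 1.00, 0.10):
    doubling_years = math.log(2) / math.log(1 + cagr)
    print(f"{cagr:.0%} per year -> doubles every {doubling_years:.1f} years")
```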

Samuel C Cogar
Reply to  Ric Werme
June 8, 2015 8:06 am

though there are signs of a slowdown in the last few years.

And me thinks that slowdown will likely continue, with the primary reason being “program code execution time”.
Most all computer programs are being “written” in a high-level programming language which has to be compiled, assembled and/or converted to “machine language”. If the “source” in said high-level programming language is medium to large, it will result, after being compiled/assembled, in millions and millions of bytes/words of “machine language” code that the Intel processor chip has to retrieve and act upon in order to process the data in question.
Like someone once said: iffen you have to run around the barn three (3) times before you can get in the door, then you are wasting a lot of “run” time.

Reply to  Ric Werme
June 12, 2015 5:52 am

As I mentioned below, the slowdown was due to the limits of air cooling, about 100 watts per square inch of chip area. Higher clock speeds were possible over the last half dozen years, but they ran too hot. The latest 22nm process has reduced the power per clock cycle somewhat, allowing clock speeds to cross the 4 GHz line and still be air cooled.
I was also at Cray while they were assembling one of the upgrades to Redstorm, way cool.

VikingExplorer
Reply to  micro6500
June 12, 2015 6:22 am

micro, running too hot and costing way too much (6B investment required) with too little ROI is exactly why Moore’s law is dead. Moore’s “law” wishful thinking makes a claim about physics and economics, so it can’t request an exemption from the laws of physics and economics. Technology will still advance, but it will be more expensive.

Reply to  VikingExplorer
June 12, 2015 6:47 am

micro, running too hot and costing way too much (6B investment required) with too little ROI is exactly why Moore’s law is dead. Moore’s “law” wishful thinking makes a claim about physics and economics, so it can’t request an exemption from the laws of physics and economics. Technology will still advance, but it will be more expensive.

Not necessarily: every time they shrink the process they get more die per wafer. It was a long time ago, but I spent 3 years in an IC failure and yield analysis lab. So they have a couple of options: they can put more cores in the same area, or they can get more die per lot (they’ve done both). More, faster cores sell for more; smaller, faster CPUs are cheaper to make.
All of the steps in wafer size as well as minimum feature size have been very expensive. IBM paid the bulk of the NRE for the step from 4″ to 6″; I’m not sure if Intel or IBM drove the step from 6″ to 8″, but the steps past that were funded by Intel, and they could do it because of all of the Intel-based computers that have been sold (like your smart phone or tablet, with gigabytes of memory – thank the often-hated WinTel).
What’s really interesting is that people like Sun came out with RISC to get higher clock speed/lower latency with smaller execution units, and then built the higher-level functions in code, with the intention of outperforming the CISC guys like Intel. But Intel sold so many more chips that they could afford to shrink the die and expand the wafer size, so now they have higher clocks with CISC cores compared to the few RISC chips still on the market (they’re almost all gone), or have merged both concepts together as needed.
But as long as they keep selling so many chips, they will IMO keep doing the same thing that made them the giant they are.
Sure there is some limit, and it’s coming, but as I said before they said the same thing in 1983.

VikingExplorer
Reply to  micro6500
June 12, 2015 7:07 am

If you admit that there are limits to physics and economics, then we agree.
As for whether we’re there yet, I would think a graph showing that it has already completely flattened out would be proof.
Computers are already too powerful for the everyday user. My son built an awesome computer for me. The problem is that even under heavy development use, the CPU is mostly idle. The average user only needs word processing and email. That’s why mobile devices rule. There is no economic demand for more power. Even now, mobile devices are usually powerful enough. When my wife got the iPhone 5s, which at the time was the fastest smart phone on the market, she didn’t notice a difference. For what she used it for, the iPhone 4 was fast enough, but not nice enough.

Reply to  VikingExplorer
June 12, 2015 7:38 am

If you admit that there are limits to physics and economics, then we agree.

Of course there will be physic and economic limits.

As for whether we’re there yet, I would think a graph showing that it has already completely flattened out would be proof.

I don’t.

Computers are already too powerful for the everyday user. My son built an awesome computer for me. The problem is that even under heavy development use, the CPU is mostly idle. The average user only needs word processing and email. That’s why mobile devices rule. There is no economic demand for more power. Even now, mobile devices are usually powerful enough. When my wife got the iPhone 5s, which at the time was the fastest smart phone on the market, she didn’t notice a difference. For what she used it for, the iPhone 4 was fast enough, but not nice enough.

But not for gamers, and not for what I do for work. And there will be new applications that will require more performance. Sixty-some years ago IBM thought we’d only need 5 computers; 30 years ago, who would have thought we’d all need a supercomputer to carry around? And yet our smartphones have the power of a supercomputer.
Realtime 3D visualization needs more power, data analytics needs more power, AI needs more power. I don’t know what applications will come along that take advantage of more power – autonomous cars, for example – things we don’t do because we don’t have the power, things we haven’t thought of yet.
Hey, I have a related (to climate) project I think one of your skills could help me out with, if you’re interested, you should be able to follow my user name and find an email link, if you’re interested email your email address.

VikingExplorer
Reply to  micro6500
June 12, 2015 8:32 am

>> But not for gamers, not for what I do for work.
There will always be high power usage, but they are also willing to pay more money. Hence, that violates Moore’s conjecture. Without associated increases in productivity or value to end users, the market will decide that because of Rock’s law, investment in the next generation isn’t worth it. That’s where we are now.
>> Realtime 3d visualization needs more power, data analytics needs more power
Yes, but since Netflix already runs well on existing platforms, the public has all the 3D visualization it needs. Gamers gravitate towards bigger screens. My kids play games with an Xbox One and a 9 foot high res screen. They aren’t likely to be interested in playing a game on a small mobile device.
Since high power usage are not demanding small size, the economic motivation isn’t there. Moore said it would double, but that depends on economics.
It’s not always better to have more transistors.

Reply to  VikingExplorer
June 12, 2015 9:01 am

There will always be high power usage, but they are also willing to pay more money. Hence, that violates Moore’s conjecture.

An observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. Moore’s law predicts that this trend will continue into the foreseeable future.

Nothing in there about economy, and again, 20 years ago we didn’t have a market for all of the computers we have now; they developed in part due to performance and cost. And doubling transistors per area, as I just mentioned, has an economic advantage.

Yes, but since Netflix already runs well on existing platforms, the public has all the 3D visualization it needs. …..
Since high power usage are not demanding small size, the economic motivation isn’t there. Moore said it would double, but that depends on economics.

No, you have no idea about the size of cloud computer centers; the cost of electricity just for cooling means there’s plenty of economy there.
And almost all of the 3D is pre-rendered – that’s why we still have multi-$1,000 video cards. Imagine a pair of stereo cameras that render a scene in 3D polygons. There are needs with economic value, and then there’s the whole security layer that has to be rebuilt (with quantum devices?), because obviously what we’re doing now isn’t going to cut it – 256-bit encryption can be cracked for minimal money on Google cloud.
Maybe it’s 10 more years, maybe it’ll be another 50 and we can’t even imagine what these systems will be doing by then. rgb says we need 300k x for GCMs. I just know it ain’t over yet.

VikingExplorer
Reply to  micro6500
June 12, 2015 11:15 am

micro, you apparently didn’t read my link. When you say “Nothing in there about economy”, you’re missing that it’s implicit. New technologies don’t just appear by divine gift. In order for a new generation to appear, a lot of investment MUST take place. In order for that to happen, players must see an economic benefit.
Did you miss the fact that the next gen has been delayed until 2018, 2020, or never? I’m friends with someone who has first hand knowledge of the efforts to create 450 mm, and he explained that it is extremely challenging. To get the right frequency, they need to shoot a laser into a cloud of tin atoms.
450mm delayed till 2023 . . . .sometime . . .never
>> the cost of electricity just for cooling
Right, but denser chips are generating more heat. Server farms don’t use the bleeding edge, since it’s not reliable enough. The fact that it’s delayed is already falsifying the Moore conjecture. Being theoretically possible isn’t enough: Moore predicted that it would happen. If in 2011 they were at 2.5B transistors, they should have been at 5B in 2013 and 10B in 2015. As of 2015, the highest transistor count in a commercially available CPU (in one chip) is 5.5 billion transistors (Intel’s 18-core Xeon Haswell-EP). They are one generation behind already, and the next one is on hold.
Actually, Moore’s conjecture was officially declared over back in February of 2014.

If per transistor costs rise, suddenly higher transistor counts become a liability. It’s still possible to pack more transistors per square millimeter, but that mindset becomes a financial liability — every increase in density increases costs instead of reducing them

Watch the video at the link. The joke is that Intel will go out of business before they admit that Moore’s law is over. We all need to accept reality as it is, without wishful thinking. We’re already 1.5 years past the point when those working in the field declared it over, so at this point it should be renamed Moore’s delusion.

Reply to  VikingExplorer
June 12, 2015 11:58 am

In general I think the overarching increase in computing power described by Moore’s Law is still going to happen. The issues with 450mm and EUV are distractions. Smaller features by themselves do not increase power usage, and you don’t have to have 450mm to reduce the cost of computing. Lastly, and even though it might be more nightmare than dream, I trust that Bill Joy has access to the scientists who have a better view of the path forward.
http://archive.wired.com/wired/archive/8.04/joy.html?pg=1&topic=&topic_set=

VikingExplorer
Reply to  micro6500
June 12, 2015 12:17 pm

Micro,
Did you really just give me a link from the year 2000? That’s 15 years ago. You need to pick up the pace, my friend; in the world of technology, opinions need to be updated every 15 months, not every 15 years.
I really can’t explain how you think something that falsifies Moore’s law is a distraction from whether it’s true or not.
As for performance, that’s also fallen by the wayside. Here you can see that clock speed and performance/clock have been flat since about 2003. Notice that it’s very difficult to find a graph that goes beyond 2011?

VikingExplorer
Reply to  micro6500
June 12, 2015 12:24 pm

micro, now that I’ve read the article more thoroughly, I’m taken aback. It’s blatantly anti-technology and delusional. If that’s what you’re about, I’m surprised, but at least you’re in the right place. There are many around here that share the anti-science, anti-technology Luddite philosophy. Bill Joy is a dangerous idiot.

Reply to  VikingExplorer
June 12, 2015 12:55 pm

First things first.
I think this answers performance: https://www.cpubenchmark.net/high_end_cpus.html – it’s still going up. Not cheap, but these are very fast computers; our design server’s 2MB memory board in 1984 was $20,000. So a couple thousand for a
No, I’m not a Luddite, not even close. If you haven’t already, you need to read Drexler. I hadn’t finished the article, nor noticed the date. And yet I still believe Moore’s Law will continue – maybe it won’t be silicon, maybe it’ll be carbon nanotubes, and maybe you’ll be right – but for now I’m in the “we’ve got at least another decade of better performance coming” camp.

Reply to  VikingExplorer
June 12, 2015 1:02 pm

Bill Joy is a dangerous idiot.

I’m not sure dangerous, but I’m a bit sadder, I spent a lot of years being impressed with him and the work at Sun. Too much left coast koolaid.

Reply to  micro6500
June 12, 2015 1:16 pm

VE, here QkslvrZ@gmail.com
Why don’t you send me your email, like I said I have some stuff we can do that you might be interested in, but I avoid discussing work here(I’m not in the Semi Industry anymore, but..) .

VikingExplorer
Reply to  micro6500
June 12, 2015 1:18 pm

Well, you can’t shift positions like that and not admit you were wrong. It’s a straw man, since I never disputed that performance would continue to rise. It was a very specific objection to Moore’s delusion, which, even though Moore himself never said it would continue, has a large group of zealots who act like it will continue indefinitely, even though logic and common sense falsify it.
I think you are wrong about better performance as well. Now that the claim is disconnected from chip density, other means of improving performance are available. I think it’s reasonable to assume that performance will continue to improve, quickly or slowly, indefinitely.

David L.
June 7, 2015 8:48 am

At the heart of the program is the climate model, and that’s the problem. Models that interpolate within known bounds are much more reliable than models that extrapolate outside known bounds. The ideal gas law is an excellent example. Within bounds of temperature, pressure, volume, and moles, the model works fine. Outside certain ranges it deviates from reality and requires “fudge factors”. These cannot be determined from “first principles” but rather empirically.
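
A rough sketch of the interpolate-versus-extrapolate point (Python; a polynomial fit stands in for any model tuned within known bounds):

```python
# A polynomial tuned to data on [0, 10] tracks it inside that range
# and goes badly wrong outside it.
import numpy as np

x = np.linspace(0, 10, 30)
y = np.sin(x)                                   # the "true" behaviour
coeffs = np.polyfit(x, y, 7)                    # a 7th-degree fit: plenty of knobs

print(np.polyval(coeffs, 5.0), np.sin(5.0))     # interpolation: close
print(np.polyval(coeffs, 20.0), np.sin(20.0))   # extrapolation: wildly off
```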

Gamecock
June 7, 2015 8:54 am

I just wish climate modelers would publish a verbal description of what their model is/how it is supposed to work. “Model” has become too nebulous.
I hear that GCMs use gridded atmosphere. But I never hear any discussion of what interactions are programmed, what influences what, what original values are used, quantitative relationships, etc.
I want a model of the model. Code makes a wonderful barrier to knowledge of what is being done. We look at model results, and declare, “That can’t be!” We need to be able to focus our analysis on the design phase. We will likely be shocked by what has been left out.

John
June 7, 2015 9:01 am

And that’s why I often say I can write a program to give you the results you want regardless of the input.

Tim Ball
June 7, 2015 9:10 am

Climate models are different because they are Gospel In Gospel Out. The Garbage is in their Bible.

terrence
Reply to  Tim Ball
June 7, 2015 3:09 pm

well said Dr Ball – I had not heard that one before – Gospel in Gospel out, THANK YOU

Leonard Weinstein
June 7, 2015 9:14 am

“all computer models are wrong, but some are useful”. Unfortunately, climate models so far have not been shown to be useful, and do not appear to show any promise. Please send more money.

Mr and Mrs David Hume
June 7, 2015 9:20 am

We are lost. Surely a computer tells us something that we do not expect only in the sense, say, that I have no more than a general idea of what to expect (being no mathematician of any kind) if I divide one very large number by another.
Manipulating the numbers according to the rules that I was taught at primary school (provided that I apply them accurately) will give me an answer that, unless I were an Indian arithmetical genius (who, however, is applying the same primary-school rules), would otherwise be unknown to me. A pocket calculator would help in applying the rules – as would a computer – but neither of them would give me a different or superior answer; it would just produce it almost as quickly as the Indian arithmetical genius.
The relationship of the original numbers to any real-world problem will depend on empirical verification. The primary-school rules, the pocket calculator and even the Met Office’s computer will be of no help there.
How is modelling different? Is there something that, all these years, as humble scribes in the world of business and administration, we have failed to understand about computers?

Mr and Mrs David Hume
Reply to  Mr and Mrs David Hume
June 7, 2015 9:23 am

For the administrator only. Got my e-mail address wrong. Corrected below.

Samuel C Cogar
Reply to  Mr and Mrs David Hume
June 8, 2015 8:36 am

David Hume et al.
Maybe you should rephrase your question to state, to wit:
Is there something that, [after] all these years, as humble scribes in the world of business and administration, we have failed to understand about the IRS Income Tax Laws?
It’s not the computer, nor is it the IRS, …… but it’s how you “apply the rules” that counts.

JP
June 7, 2015 9:21 am

One of the hottest growing fields on Wall St circa 1990-2007 was econometrics. Due to the rapid fall in the price of hardware (compute, memory, and storage), firms could afford to build sophisticated risk models that didn’t rely on expensive mainframes. All of a sudden there was a very large demand for economists with mathematical and software development skills. Nerdy men and women with advanced degrees did everything from building risk models for mortgage derivatives to Credit Default Swaps. And what was the result? One of the largest stock market crashes in human history. Yes, there were a few savvy investors like John Paulson who, using his own risk models, was able to make one of the biggest short-sells in recent history (he shorted corporate bonds that were backed by Credit Default Swaps). But over all I believe Paulson used his own experience and intuition in deciding the timing of the short-sell. If the CDSs and CDOs had collapsed at a later time, Paulson would have lost his entire portfolio of several hundred million. But Paulson nailed it and made $3 billion on the bet. The point is that most risk models still projected healthy returns on CDSs and Synthetic CDOs right into 2008. Of course, people could argue that Goldman Sachs modelers knew better (see Goldman’s Abacus scandal).
Climate models like risk models rely on sophisticated analytics. But, like forecasting risk in the securities market, climate models do not live up to their promise. And we shouldn’t confuse forecast models and climate models. The improvement in forecast models during the last 40 years has been phenomenal. For some reason people who should know better confuse the two. As a matter of fact, as climate models fail to project global temperature patterns, Alarmists have used weather events (heavy rains in Texas, blizzards on the East Coast) to verify their projections. Currently they are forced to make fools of themselves as they use the absurdity of “extreme weather” to say they were right all along – all while global temperatures haven’t budged in 20 years.

Gamecock
Reply to  JP
June 7, 2015 10:52 am

“Climate models like risk models rely on sophisticated analytics.”
Their analytics are unpublished. How would you know if they were sophisticated?

JP
Reply to  Gamecock
June 7, 2015 1:06 pm

Unpublished? Good grief, you’d have a difficult time not finding published work in quantitative finance, financial modeling, single-equation econometrics, labor, price, casualty, and risk modeling. Where do you think the econometricians come from? Universities. Both Ross McKitrick and Steven McIntyre come out of this field. Both have at least a Masters in advanced numerical analysis. The Federal Reserve employs a large number of economists who have Masters and PhDs in modeling. But the vast majority of these specialists go to large Wall St firms. And they bring with them ideas and theories well published in the world of advanced economics.

Gamecock
Reply to  Gamecock
June 7, 2015 3:40 pm

Fine, JP. Direct me to a published climate GCM model.

Gamecock
Reply to  Gamecock
June 7, 2015 5:03 pm

JP, your declaration of orthodoxy is not evidence of sophistication. It is evidence of sophistry.

Jquip
June 7, 2015 9:40 am

lsvalgaard:

I disagree. The computer [more precisely its software] can make visible that which we cannot see. A couple of examples will suffice. The computer can tell us when to fire the rocket engines and for how long in order to arrive at Pluto ten years hence. The computer can, given the mass, the composition of a star, and nuclear reaction rates tell us how many neutrinos of what energy will be produced. The results are not ‘built in’ in the sense that the computer just regurgitates what we put in.

You’re talking about different cases here. In the Pluto case we are already assuming the path, that is the physical end goal in reality, and the computations are just a satisfiability problem about the parameters of the rocket necessary to satisfy that goal. It is, in all respects, a curve fitting exercise with a strong preconception of reaching Pluto.
That we accept the validity of this computational exercise isn’t really germane. We accept it because we engineer with it. That is, we test the model against reality. A lot. But if we did not test the model, or had no way to do so, then what is the worth of the model? If you cannot engineer with it, you cannot test it, and so you cannot state what it is that’s even going to occur in reality.
Your other example is open-ended. That is, you set up your simulation and look for the emergent answer. There is no end goal that you’re wed to, and it is not a satisfiability or a preconception problem for the outcome. You just put in your priors and see what falls out the other side. If you have accurately put in all the relevant bits about neutrino production from theory then you receive an unbounded number. This you can test as you like assuming we can engineer a neutrino detector.
Both considerations avoid dealing with a singular problem with computers however. A very popular misconception, but a deep one: Computers do not perform mathematics. They simulate them. To get mathematics done properly in a computer you simply cannot make direct and uncritical use of the built-in cpu simulators for integer and real numbers. You need to produce an entire tapestry of software that deals with things in symbolic fashion where possible, produce on demand arbitrary/infinite precision numeric software libraries, or a whole host of extra computation to deal with inaccuracies that creep in by consequence of making uncritical and blind use of the hardware internal math simulators.
And this is a huge problem. For if we are dealing with math, we simply publish our paper with the relevant mathematical hieroglyphics and know that everyone else can sort it out. But if we run any calculation on a computer, we must first put those hieroglyphics into a form that the computer can savvy and simulate. But that also means that the software code is not math — properly understood — and that any differences in the hardware math simulators may cause divergent results that are inscrutable from the point of view of the hand-written hieroglyphics.
And due to the nature of these math simulators, it is entirely possible to accurately encode the same exact math in two different manners and receive wildly different results from the machine. And yet, as a point of hand-written math, the results should be identical. This errant behaviour can be accomplished by something as simple as multiplying before subtracting and vice versa.
By consequence, it is not enough to simply state ‘computers are just big calculators.’ If they are used to generate results, then we need to know that their outcome is legit. Regardless of what math the scientist thought he might be doing, the computer will do exactly what it is told – even if the scientist isn’t aware that he told it to do so. If a paper does not include a full spec of the software code, software library versions, compiler versions and hardware used, then the results cannot be guaranteed to be replicable.
And we are disbarred from then stating ‘this is the math’ unless we are minimally shown these things. Or unless all the math is simulated by a known software framework that is well vetted for correctness in these domains.
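
A small Python illustration of that ordering point – two algebraically identical expressions giving different machine answers:

```python
# Same math on paper, different results in floating point, purely because
# of the order of operations.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)       # 1.0
print(a + (b + c))       # 0.0 -- the 1.0 vanishes when added to -1e16 first

x = 1e-8
print((1.0 + x) - 1.0)   # about 9.9999999e-09, not exactly 1e-08
```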

Reply to  Jquip
June 7, 2015 9:44 am

The program was written for the case of reaching Pluto, but is general purpose and will work with any goal [Mars, Jupiter, etc], so the answer was not ‘built in’.

tty
Reply to  lsvalgaard
June 7, 2015 10:51 am

And when you get right down to it, that program is really based on a single equation:
F = G·m1·m2 / r²
Now, I don’t dispute that modern orbit calculation programs are quite marvellous, since they can handle (by numerical approximations) the gravitational interaction of a large number of bodies, but the underlying physics is actually rather simple, certainly in comparison to the Earth’s climate system.
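
As a purely illustrative sketch of the kind of integration such programs perform (Python; semi-implicit Euler, units chosen so G·M = 1 – nothing like a production ephemeris code):

```python
# One small body orbiting a central mass under the inverse-square law.
import math

x, y = 1.0, 0.0            # position
vx, vy = 0.0, 1.0          # velocity for a circular orbit of radius 1
dt = 0.001

for _ in range(int(2 * math.pi / dt)):   # roughly one orbital period
    r = math.hypot(x, y)
    ax, ay = -x / r**3, -y / r**3        # acceleration from F = G*m1*m2/r^2
    vx, vy = vx + ax * dt, vy + ay * dt  # update velocity first (semi-implicit Euler)
    x, y = x + vx * dt, y + vy * dt

print(x, y)   # comes back near (1, 0) after one period
```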

Reply to  tty
June 7, 2015 11:03 am

There are also relativistic and tidal effects, so it is a bit more complicated than just Newtonian. But the issue is still the same: is the model deliberately misconstructed so as to give the answer one wants? I maintain it is not. Some conspiracy nuts might disagree, at their peril.

VikingExplorer
Reply to  lsvalgaard
June 8, 2015 5:19 pm

tty, except we don’t determine any of those values very well when the number of bodies exceeds 3. The sun is losing mass, orbits are elliptical, orbits decay, etc. The problem is not easy. However, people aren’t giving up and declaring the problem unsolvable.

graphicconception
Reply to  lsvalgaard
June 8, 2015 5:54 pm

“… tell us how many neutrinos of what energy will be produced.” How do you know it gave you the right answer? Can you work it out without the computer or were you guessing?

Reply to  graphicconception
June 8, 2015 7:37 pm

I know the physics and have measured the nuclear reaction rates in the laboratory so can easily compute the number of neutrinos to expect. And when we measure that number, we find a very good match.
To give you a taste of a similar calculation: can I calculate how much helium was generated in the Big Bang?
Yes I can, and you can even understand the calculation. Here it is:
http://www.leif.org/research/Helium.pdf

catweazle666
Reply to  lsvalgaard
June 9, 2015 7:08 am

“To give you a taste of a similar calculation: can I calculate how much helium was generated in the Big Bang?
Yes I can, and you can even understand the calculation. Here it is:
http://www.leif.org/research/Helium.pdf

Very impressive, Professor.
Tell me, have you calculated how many angels can dance on the head of a pin yet?

Reply to  catweazle666
June 9, 2015 7:16 am

It depends on the size of the pin. Please give me your best unbiased estimate of pin size. Actual data, no models, please.

Reply to  Jquip
June 7, 2015 9:50 am

The programs show their replicability (is there such a word? – Now there is) by working consistently with other inputs and goals [e.g. going to Mars instead]. All the things you mention would be nice, but never happen in real life.

Reply to  lsvalgaard
June 7, 2015 10:19 am

“All the things you mention would be nice, but never happen in real life.”
Now who is being nihilistic? 🙂

Jquip
Reply to  Jquip
June 7, 2015 10:41 am

The program was written for the case of reaching Pluto, but is general purpose and will work with any goal [Mars, Jupiter, etc], so the answer was not ‘built in’.

The positions of the start and end are expressly encoded. As otherwise we would enter the physics of a rocket engine, run a simulation, and the rocket would end up in some random place. Such simulations are, in every case, curve fitting or satisfiability problems. The end state is always encoded necessarily.

(is there such a word? – Now there is

http://lmgtfy.com/?q=definition+of+replicability

All the things you mention would be nice, but never happen in real life.

We are absolutely agreed here. The current practice of science is to not only fail to produce replicable experiments, it is to fail to produce replicable math. Any disagreement between the two of us would seem to be over whether to call this real life happening ‘science’ or not. Or, whatever we choose to call it, whether or not it should be given more credibility than personal anecdotes.

Reply to  Jquip
June 7, 2015 10:51 am

The positions of the start and end are expressly encoded
No, the code takes these as input.

Jquip
Reply to  Jquip
June 7, 2015 11:01 am

No, the code takes these as input.

Of course, that’s the point everyone else is on about. It seems we were in a contest of vigorous agreement.

William C Rostron
Reply to  Jquip
June 8, 2015 6:51 am

jquip, you write:
” But that also means that the software code is not math — properly understood — and that any differences in the hardware math simulators may cause divergent results that are inscrutable from the point of view of the hand-written hieroglyphics.”
This statement is not correct. What computers do at the most fundamental level is most certainly mathematics; it’s just that that mathematics is a subset of the greater field of mathematics. Computers are based on Boolean Algebra; the internal operation of any computer is as mathematical as anything could possibly be.
This fact has tremendous implications in the patent world, because software, per se, is mathematics, and mathematics is excluded from patentability because it is a fundamental tool like language or other natural phenomena. What is patentable is not the mathematics itself, but the results of that computation which may well be a unique invention. (I am not a patent lawyer)
It is true that computers cannot render certain mathematical concepts in real form. For example, irrational numbers cannot be exactly rendered because of the discrete coding of numerical values in the computer.
Also, Boolean Algebra (it’s an algebra subset) is inherently incapable of rendering certain higher concepts of mathematics because of the discrete nature of logic. However, any mathematical concept which relies on logic for determination can be rendered perfectly in Boolean Algebra, including the rules of algebra itself. The Maple symbolic engine comes to mind. So the algebraic rules for the manipulation of symbols can be perfectly rendered on a computer, just as those symbols are manipulated on a sheet of paper by a skilled mathematician.
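A minimal sketch in Python (my own illustration, nothing to do with the Maple engine itself) of the distinction: binary floating point cannot represent some ordinary numbers exactly, while exact rational arithmetic, which is nothing more than rule-based symbol manipulation, behaves exactly as it would on paper.

from fractions import Fraction

# Binary floating point cannot represent 0.1, 0.2 or 1/3 exactly,
# so seemingly obvious identities fail:
print(0.1 + 0.2 == 0.3)                      # False
print(sum(0.1 for _ in range(10)) == 1.0)    # False

# Exact rational arithmetic, by contrast, follows fixed algebraic rules
# and gives the same answers a person would get on a sheet of paper:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
print(sum(Fraction(1, 10) for _ in range(10)) == 1)           # True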
There is an inescapable difference between computation by logic and the real world. The real world appears to be not logical at its most fundamental level – the level of quantum mechanics. The motion of atoms in a substance cannot be exactly determined, though that motion might be constrained to a high degree. Computation in Boolean Algebra, in contrast, is always exactly determined. From first principles it is not possible to render quantum phenomena in logic; they are different worlds.
Mathematics itself is not a quantum phenomenon, but discovery of the rules of mathematics may well be. Gödel’s incompleteness theorem implies that the rules of mathematics cannot be derived from first principles; rather, they must be discovered.
-BillR

June 7, 2015 10:09 am

LeifV said-
“But we have to keep trying, and eventually we may get it right. It is nihilistic to think that just because it doesn’t work today, it will never work. At some point in time it will work well enough to be useful. ”
I haven’t read all the comments yet, so someone else might have addressed this, but when I read that I asked “But the climate modelers/AGW people think they ARE getting it right ALREADY. Many of them think they DO WORK TODAY. And most, if not all of them think they work well enough to be useful NOW. So why would they continue to try to CHANGE things, adjust, modify, or “learn” anything new or outside of their current understanding?”
And another question Leif-what if they currently had a model that produces EXACTLY, or close to it, the actual climate conditions we are experiencing today? Would any rational, intelligent person believe that they’ve actually, correctly modeled Earth’s climate? No, a rational, intelligent person would have to WAIT and continue to run that model for years, decades, all the while comparing it to the current climate before they’d even dare to believe that the model MIGHT actually be correct, rather than just matching up for a short period of time for some random reason, and that there simply cannot be another outside factor left that might cause it to diverge from reality in the future. But I would bet my life, yes, my actual life, that they would announce to the world IMMEDIATELY that they can now model the climate perfectly and accurately and parade the mantra that “The science really is settled now”. And millions of people would believe them!
It’s not nihilism to think that. It’s REALISM based upon evidence of human behavior and hubris spread over thousands of years!
And Eric-this is the priceless statement for me-
“They might as well say they wrote their opinion into a MS Word document, and printed it – here is the proof see, its printed on a piece of paper…”
Exactly! But sadly, if that paper has NASA, NOAA, White House, or hell, even a college or university letterhead at the top, then it IS actual “Science” to sooooooooooooo many people! How stupid can people be?

Reply to  Aphan
June 7, 2015 10:18 am

Well, there are well-established statistical measures of how long you have to wait before declaring that the model works to a specified accuracy or ‘believability’. Rational persons would use such a measure. An analog would be to think about quality control. You select a small number of items of a product and check if they are OK. It is too expensive to check ALL the items, so the question becomes “how many should I check to be assured of a given degree of ‘goodness'”. This is not rocket science and takes place every day in countless enterprises.
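As a rough sketch of the arithmetic involved (my own illustration, assuming the simplest possible acceptance rule of “zero defects in the sample”): the chance of seeing no defects in n items when the true defect rate is p is (1-p)^n, so the sample size needed to be confident at level c that the rate is below p is n >= ln(1-c)/ln(1-p). In Python:

import math

def sample_size(defect_rate: float, confidence: float) -> int:
    # Smallest n such that a true defect rate of defect_rate would, with
    # probability confidence, produce at least one defect among n items,
    # so that seeing zero defects supports claiming a lower rate.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - defect_rate))

# To be 95% sure the defect rate is below 1%, check about 299 items:
print(sample_size(0.01, 0.95))   # 299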

Jquip
Reply to  lsvalgaard
June 7, 2015 10:47 am

Well sure, statistics is rather well formulated even if its uses fall short quite often. And the difference in use can be remarkably subtle. An easy to understand example of this difficulty in statistical modelling is demonstrated by Bertrand’s Paradox.
http://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29
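A quick Monte Carlo sketch (my own, just to make the paradox concrete): three equally “reasonable” ways of drawing a random chord give three different probabilities that the chord is longer than the side of the inscribed equilateral triangle.

import math, random

N, R = 100_000, 1.0
side = math.sqrt(3) * R      # side of the inscribed equilateral triangle
hits = [0, 0, 0]

for _ in range(N):
    # Method 1: chord through two random points on the circle   -> P ~ 1/3
    delta = abs(random.uniform(0, 2 * math.pi) - random.uniform(0, 2 * math.pi))
    if 2 * R * math.sin(delta / 2) > side:
        hits[0] += 1
    # Method 2: random distance from the center along a radius  -> P ~ 1/2
    d = random.uniform(0, R)
    if 2 * math.sqrt(R * R - d * d) > side:
        hits[1] += 1
    # Method 3: chord midpoint chosen uniformly in the disc     -> P ~ 1/4
    m = R * math.sqrt(random.random())
    if 2 * math.sqrt(R * R - m * m) > side:
        hits[2] += 1

print([h / N for h in hits])   # roughly [0.333, 0.5, 0.25]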

Reply to  Jquip
June 7, 2015 10:49 am

The quality control procedure works very well regardless of any such subtleties.

Jquip
Reply to  lsvalgaard
June 7, 2015 10:59 am

The quality control procedure works very well regardless of any such subtleties.

Define ‘very well’ — what are the statistical bounds of such a term? Each of the three examples in Bertrand’s Paradox work ‘very well’ depending on what value of ‘very well’ we choose. And yes, there are subtleties in quality control, as there are common and special causes, and quite often a lack of Gaussian distribution. It’s not simply a matter of chucking a bell curve at things and then repeating the Central Limit Theorem as a doxology.

Reply to  Jquip
June 7, 2015 11:08 am

There is a lot of literature about that. Google is your friend.

Jquip
Reply to  lsvalgaard
June 7, 2015 11:50 am

There is a lot of literature about that. Google is your friend.

Indeed there is. But I’m not sure how your opinion of my ignorance is germane to the point: the statistical literature I’ve referenced establishes that there are subtleties, and that there are special causes that need to be dealt with. Not just in the limited case of product quality control, but in the very point Aphan was making about validating the model against observation, and about the special causes (the ‘outside factors’) that could cause divergence from theoretical models.
Name-dropping Google here just doesn’t suffice as a rebuttal.

Reply to  Jquip
June 7, 2015 11:52 am

not rebuttal, just guidance.

Jquip
Reply to  Aphan
June 7, 2015 12:45 pm

not rebuttal, just guidance.

My apologies, I thought you disagreed with my point. Just another case of vigorous agreement between us, it seems.

Reply to  Jquip
June 7, 2015 1:22 pm

Perhaps I missed what your point was. Many people in this thread have persisted in pointing out that I missed their points, so why not yours. My point, in case you missed it, is that without specifically identifying what special circumstance you have in mind, it remains hand waving. Granted, hand waves are the most common wave phenomenon known to man.

Jquip
Reply to  Jquip
June 7, 2015 1:44 pm

My point, in case you missed it, is that without specifically identifying what special circumstance you have in mind, it remains hand waving.

This statement is hard to square with your previous statement, which I will quote here verbatim:

Well, there are well-established statistical measures of how long you have to wait before declaring that the model works to a specified accuracy or ‘believability’. Rational persons would use such a measure. An analog would be to think about quality control. You select a small number of items of a product and check if they are OK. It is too expensive to check ALL the items, so the question becomes “how many should I check to be assured of a given degree of ‘goodness’”. This is not rocket science and takes place every day in countless enterprises.

One of the subtleties of such a trivially simple statistical model that I mentioned was ‘special causes.’ Given that you apparently missed your own point due to these subtleties, permit me to give you some guidance on basic statistics. An easy-to-read primer on this specific topic can be found here:
http://en.wikipedia.org/wiki/Common_cause_and_special_cause_(statistics)
The important portion, quoted inline:
Special-cause variation is characterised by:[citation needed]
New, unanticipated, emergent or previously neglected phenomena within the system;
Variation inherently unpredictable, even probabilistically;
Variation outside the historical experience base; and
Evidence of some inherent change in the system or our knowledge of it.
I’ve taken the liberty of bolding the point most relevant to coupled and chaotic systems. But all of them are directly relevant to Aphan’s questions. And you’ll forgive any appearance of a knock on your competence as a scientist, but these basic statistical issues seem to have been too subtle for you personally in this short and light discussion about statistics. And since you are unquestionably top tier in your field, and I would submit within the Scientific disciplines generally, it can only be fairly concluded that your lesser peers have an astonishing lack of competency in basic statistics: their construction, use, and validation through experimentation and observation.
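To make the common-cause/special-cause distinction concrete, here is a minimal Shewhart-style sketch (my own illustration, not taken from the linked article): establish control limits from data collected while the process is behaving normally, then flag later points that fall outside those limits as candidates for a special cause.

import random
import statistics

random.seed(42)

# Phase 1: a process subject only to common-cause variation.
baseline = [random.gauss(10.0, 1.0) for _ in range(200)]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lower, upper = mean - 3 * sigma, mean + 3 * sigma

# Phase 2: the same process, except that something unanticipated shifts it,
# a stand-in for a special cause outside the historical experience base.
new_data = ([random.gauss(10.0, 1.0) for _ in range(20)]
            + [random.gauss(14.0, 1.0) for _ in range(5)])

for i, x in enumerate(new_data):
    if not (lower <= x <= upper):
        print(f"point {i}: {x:.2f} outside [{lower:.2f}, {upper:.2f}]"
              " - investigate a possible special cause")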

Reply to  Jquip
June 7, 2015 1:52 pm

I think you are overreaching and extrapolating beyond reason. Astronomers deal routinely with enormous masses of data and have great practice and experience in applying statistics. For example, our solar satellite SDO collects more than a terabyte of data every day. The Kepler satellite was staring at and observing 150,000 stars simultaneously. Galaxy and star surveys deal in the billions. Predictions of solar activity, flares, and interplanetary medium characteristics are routine. If there is something we are good at, it is statistics.

Jquip
Reply to  Jquip
June 7, 2015 2:46 pm

If there is something we are good at it is statistics.

Well, if we appeal to your authority, as we have both now done, then no, it is not obvious. We can say, however, that if there is something you and your discipline are, it is better at statistics than the other disciplines in science. But being ‘not as bad as’ is not synonymous with ‘good.’
And if your statement is that massive volumes of data collection free you from this, then that too is in error. It does not, of course. We can go through the long list of statistical failures in astronomy and astrophysics that were based on shoddy use of statistics and insufficient care with respect to ‘special causes’: a foundational consideration in product quality control, a topic you’ve professed familiarity with and declared to be “… not rocket science and takes place every day in countless enterprises.”
But these are all the very things that Aphan referred to. Those things that are not rocket science, but are apparently beyond men who look at the stars. I submit that this has nothing to do with you personally, beyond that you are a member of the Scientific disciplines. But there is an ever-growing ignorance about Statistics in Science, even as Science relies in ever greater part on Statistics. This is an absurd outcome.

Phil Cartier
June 7, 2015 10:15 am

To All: I hope all of you will watch this video on YouTube by Dr. Christopher Essex https://www.youtube.com/watch?v=19q1i-wAUpY if you haven’t already done so.
The discussion here seems to show that all the usual talking points are showing up.
The article is woefully deficient, in that modeling climate on computers is limited in many unexpected ways beyond those it describes, as Dr. Essex points out. It is not simply a matter of a program being biased by its author. Computers and programs are limited in other basic ways that make the problem literally impossible without an improvement of at least 10^20 in computing power.
Some of the other points, in no specific order-
The basic physics has not been solved. The Navier-Stokes equations (viscous fluid flow), like many multi-variable partial differential equations, have no known general analytical solution and have to be approximated by numerical computer methods.
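To give a flavor of what “approximated by numerical methods” means, here is a toy sketch (mine, and for the far simpler one-dimensional heat equation rather than Navier-Stokes): the continuous equation is replaced by discrete update steps on a grid, and every one of those steps is itself only an approximation.

import math

# Explicit finite-difference step for u_t = alpha * u_xx on a 1-D grid.
# The derivative u_xx is approximated by (u[i-1] - 2*u[i] + u[i+1]) / dx**2,
# so every step carries truncation error on top of floating-point rounding.
alpha, dx, dt = 1.0, 0.1, 0.004        # dt chosen so alpha*dt/dx**2 <= 0.5
r = alpha * dt / dx**2                 # stability parameter

u = [math.sin(math.pi * i * dx) for i in range(11)]    # initial temperature
for _ in range(100):                                   # march forward in time
    u = [0.0] + [u[i] + r * (u[i-1] - 2 * u[i] + u[i+1])
                 for i in range(1, 10)] + [0.0]

print(max(u))   # decays toward zero, as the exact solution does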
Computers generally use floating-point representation: a sign bit, an 11-bit exponent and a 53-bit significand (52 bits of it stored explicitly), packed into 64 bits (8 bytes), or into longer, more precise formats. For most things this can be made to work satisfactorily, but numbers such as 1/3 or pi cannot be represented exactly, since they have an infinite number of places. Climate models all include partial differential equations that have to be approximated, not solved exactly, and every calculation introduces a small error. The limit on the precision is the machine epsilon: the gap between 1 and the next larger number the machine can represent.
When trying to solve equations this way, the answer eventually “blows up”, because the accumulated small errors grow with every calculation.
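A small demonstration of both points (machine epsilon and error accumulation) that anyone can run:

import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps, sys.float_info.epsilon)     # both about 2.22e-16

# Accumulated rounding error: repeatedly adding a value that has no exact
# binary representation drifts away from the true answer, and the drift
# grows with the number of operations.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total - 1_000_000.0)             # not zero; the error has accumulated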
Matters of scale: both water and air are fluids. The flow can be either smooth (laminar) or chaotic (vortices), or both. The moving fluid moves energy through the flow. As the flow slows down (say, a stream rushing into a lake) the vortices gradually break up and the speed drops until, at around 1 mm, any organized motion disappears into the Brownian motion of vibrating atoms: a tiny bit faster or hotter, but indistinguishable. So climate calculations would need to be done on a millimeter or smaller scale. How many square millimeters are there on the earth’s surface, or how many cubic ones in the air and ocean?
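To put rough numbers on that question (a back-of-the-envelope sketch, assuming a mean Earth radius of about 6,371 km and, say, a 10 km deep layer of atmosphere):

import math

R_mm = 6_371 * 1_000 * 1_000           # Earth's mean radius in millimeters
surface_mm2 = 4 * math.pi * R_mm**2    # square millimeters of surface
layer_mm3 = surface_mm2 * 10_000_000   # cubic mm in a ~10 km deep layer of air

print(f"{surface_mm2:.1e} mm^2")       # about 5.1e+20 square millimeters
print(f"{layer_mm3:.1e} mm^3")         # about 5.1e+27 cubic millimeters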
Since climate is weather in the long term, climate models are essentially weather models with scales way, way larger than scales that are already way, way too large. Weather models are now generally fairly good for a few days to a week or so. Climate models are unlikely to be accurate for even a month, except that the predictions they make have such wide error ranges it’s hard to say exactly when they’ve gone wrong.
Thank you, Dr. Essex. You are a much better mathematician and physics student than I am.

MRW
Reply to  Phil Cartier
June 9, 2015 1:22 pm

This was a brilliant hour to watch. Dr. Essex shows how, even if you do put the proper data and formulae in (the physics), you “can still get garbage out” of a finite computer. He shows it mathematically. (Also see his paper “Numerical Monsters” by Christopher Essex, Matt Davison, and Christian Schulzky:
http://www.math.unm.edu/~vageli/courses/Ma505/I/p16-essex.pdf)
