Guest Post by Willis Eschenbach
Eric Worrell posted an interesting article wherein a climate “scientist” says that falsifiability is not an integral part of science … now that’s bizarre madness to me, but here’s what she says:
It turns out that my work now as a climate scientist doesn’t quite gel with the way we typically talk about science and how science works.
…
1. Methods aren’t always necessarily falsifiable
Falsifiability is the idea that an assertion can be shown to be false by an experiment or an observation, and is critical to distinctions between “true science” and “pseudoscience”.
Climate models are important and complex tools for understanding the climate system. Are climate models falsifiable? Are they science? A test of falsifiability requires a model test or climate observation that shows global warming caused by increased human-produced greenhouse gases is untrue. It is difficult to propose a test of climate models in advance that is falsifiable.
Science is complicated – and doesn’t always fit the simplified version we learn as children.
This difficulty doesn’t mean that climate models or climate science are invalid or untrustworthy. Climate models are carefully developed and evaluated based on their ability to accurately reproduce observed climate trends and processes. This is why climatologists have confidence in them as scientific tools, not because of ideas around falsifiability.
For some time now, I’ve said that a computer model is merely a solid incarnation of the beliefs, theories, and misconceptions of the programmers. However, there is a lovely new paper called The Effect of Fossil Fuel Emissions on Sea Level Rise: An Exploratory Study in which I found a curious statement. The paper deserves reading on its own merits, but there was one sentence in it which struck me as a natural extension of what I have been saying, but one which I’d never considered.

The author, Jamal Munshi, who it turns out works at my alma mater about 45 minutes from where I live, first described the findings of other scientists regarding sea level acceleration. He then says:
This work is a critical evaluation of these findings. Three weaknesses in this line of empirical research are noted.
First, the use of climate models interferes with the validity of the empirical test because models are an expression of theory and their use compromises the independence of the empirical test of theory from the theory itself.
Secondly, correlations between cumulative SLR and cumulative emissions do not serve as empirical evidence because correlations between cumulative values of time series data are spurious (Munshi, 2017).
And third, the usually held belief that acceleration in SLR, in and of itself, serves as evidence of its anthropogenic cause is a form of circular reasoning because it assumes that acceleration is unnatural.
Now, each of these is indeed a devastating critique of the state of the science regarding sea level acceleration. However, I was particularly struck by the first one, viz:
… the use of climate models interferes with the validity of the empirical test because models are an expression of theory and their use compromises the independence of the empirical test of theory from the theory itself.
Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.
Now, the scientist quoted by Eric Worrell above says that scientists believe the models because they “accurately reproduce climate trends and processes”. However, I see very little evidence of that. In the event, they have wildly overestimated the changes in temperature since the start of this century. Yes, they can reproduce the historical record, if you squint at it in the dusk with the light behind it … but that’s because they’ve been evolutionarily trained to do that—the ones that couldn’t reproduce the past died on the cutting room floor. However, for anything else, like say rainfall and temperature at various locations, they perform very poorly.
Finally, I’ve shown that the modeled global temperature output can be emulated to a very high degree of accuracy by a simple lagging and rescaling of the inputs … despite their complexity, their output is a simple function of their input.
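For readers who want to see what such a "lagging and rescaling of the inputs" looks like in practice, here is a minimal Python sketch. It is illustrative only: the toy forcing series and the sensitivity (lam) and lag (tau) values are invented for the example, not the fitted values from that analysis.

```python
import numpy as np

# Minimal sketch of a lag-and-rescale emulator. The forcing series and the
# lam (sensitivity, K per W/m2) and tau (lag, years) values are illustrative
# assumptions, not fitted values from any actual model comparison.
lam, tau = 0.4, 4.0

years = np.arange(1850, 2101)
co2 = np.interp(years, [1850, 2100], [285.0, 700.0])   # toy CO2 path, ppm
forcing = 5.35 * np.log(co2 / 285.0)                   # simplified CO2-only forcing, W/m2

temp = np.zeros_like(forcing)
for i in range(1, len(forcing)):
    # temperature relaxes toward lam * forcing with time constant tau
    temp[i] = temp[i - 1] + (lam * forcing[i] - temp[i - 1]) / tau

print(f"emulated warming by 2100: {temp[-1]:.2f} K (toy numbers)")
```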
So … since:
• we can’t trust the models because their predictions suck, and
• we can emulate their temperature output with a simple function of their input forcing, and
• they are an expression of the CO2 theory so they are less than useful in testing that theory …
… then … just what is it that they are good for?
Yes, I’m aware that all models are wrong, but some models are useful … however, are climate models useful? And if so, just what are these models useful for?
I’ll leave it there for y’all to take forwards. I’m reluctant to say anything further, ’cause I know that every word I write increases the odds that some charming fellow like 1sky1 or Mosh will come along to tell me in very unpleasant terms that I’m doing it wrong because I’m so dumb, and then they will flat-out refuse to demonstrate how to do it right.
Most days that’s not a problem, but it’s after midnight here, the stars are out, and my blood pressure is just fine, so I’ll let someone else have that fun …
My regards to everyone, commenters and lurkers, even 1sky1 and Mosh, I wish you all only the best,
w.
My Usual Request: Misunderstandings start easily and can last forever. I politely request that commenters QUOTE THE EXACT WORDS YOU DISAGREE WITH, so we can all understand your objection.
My Second Request: Please do not stop after merely claiming I’m using the wrong dataset or the wrong method. I may well be wrong, but such observations are not meaningful until you add a link to the proper dataset or an explanation of the right method.
John von Neumann famously said:
«With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.»
By this, he meant that one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set. It turns out you can literally fit an elephant with four parameters if you allow the parameters to be complex numbers.
Drawing an elephant with four complex parameters by Jurgen Mayer, Khaled Khairy, and Jonathon Howard, Am. J. Phys. 78, 648 (2010), DOI:10.1119/1.3254017.
http://2.bp.blogspot.com/-CkKUPo04Zw0/VNyeHnv0zuI/AAAAAAAABq8/2BiVrFHTO2Q/s1600/Untitled.jpg
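Von Neumann's point is easy to demonstrate for yourself. The sketch below is not the elephant fit from the Mayer et al. paper; it is just a toy Python example showing that with as many free parameters as data points you can "fit" pure noise exactly.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Toy demonstration: with as many parameters as data points, a polynomial
# will reproduce any data set exactly -- even data that is nothing but noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = rng.normal(size=8)              # pure noise, no signal at all

p = Polynomial.fit(x, y, deg=7)     # 8 coefficients for 8 points
residual = np.max(np.abs(p(x) - y))

print(f"largest residual of the 'fit': {residual:.2e}")   # essentially zero
```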
Funnily enough, ‘Elephant’ is the nickname here in France for the oldest leftist leaders.
“Elephant” is the GOP’s mascot, too. (Grand Old Party in the US, or the Republicans.)
That looks a lot more like a Mastodon than an Elephant. But it seems a lot closer to being an Elephant than climate models do to being credible predictors.
And it is undeniably cute.
“If you put tomfoolery into a computer, nothing comes out of it but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.”
– Pierre Gallois
Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but, nevertheless, what you get out depends upon what you put in; and as the grandest mill in the world will not extract wheat-flour from peascod, so pages of formulae will not get a definite result out of loose data. – T. H. Huxley
There has been a noticeable increase in papers which claim that a model producing the result it was built to produce thereby proves that result is right.
I think the authors actually believe this lunacy, partly because they start off with such a strong belief in the result they want, partly because they have been conditioned to not actually think, and partly because they do not understand models.
We seem to have reached Peak Lunacy in the AGW world, where the “science” that is being churned out is beyond parody and beyond reason. At the same time, the output of “real” climate science is correspondingly low – have there been any important advances in 5-8 years?
I hope this means we are on the cusp of it all collapsing.
And partly because they don’t read skeptical critiques, only their side’s misrepresentations of them.
It is also indicative of novice or inexperienced programmers.
Folks start out certain their code is perfect and that they are right. It takes about 5 years of hair pulling, midnight phone calls, crashes, and bug reports / bug stomping to drive that out of new programmers and let them reach the curmudgeon defensive programmer style that has few bugs.
At the 10 year point or so they even start to suspect the regression tests and QA suite of being wrong…
(Me? Programmer for about 45 years, ran QA department for a compiler tool chain. Ported GIStemp to Linux in my office. Built a personal Beowulf cluster for fun. Currently running a 3 node, 12 core Raspberry Pi cluster with distcc for distributed compiles, and playing with two climate models – though at low priority. Not at all impressed with the climate models, and GIStemp is flat out junk. And yes, I now write VERY paranoid and VERY careful code and still don’t trust it. Why? Compiler bugs. I’ve actually had code that said, basically, IF A do foo, IF NOT A do bar, ELSE Print “you can’t get here”; when run, it printed out “you can’t get here”… So my code is often testing for impossible cases… the climate codes I’ve seen look to be barely tested, often changed, and lacking a regression suite. A formula for rampant failure.)
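For anyone who has never seen the "you can't get here" pattern described above, here is a tiny Python illustration; the function and values are invented for the example. A NaN input is exactly the sort of "impossible" case that makes the supposedly unreachable branch reachable.

```python
def classify(reading: float) -> str:
    """Toy example of guarding the 'impossible' branch."""
    if reading >= 0.0:
        return "non-negative"
    if reading < 0.0:
        return "negative"
    # Neither comparison is true for NaN, so the "unreachable" branch
    # can in fact be reached -- trap it loudly instead of carrying on.
    raise AssertionError(f"you can't get here: reading={reading!r}")

print(classify(1.5))                  # non-negative
try:
    classify(float("nan"))
except AssertionError as err:
    print("caught:", err)             # the impossible case, caught
```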
Most code isn’t even written to be testable. I have come to believe that testing must begin at the requirements phase, long before code is even written. Otherwise it’s just an exercise in trying to find all the bugs and mis-specifications after the main development team has moved on to something else.
Of course, the GCMs have no formal requirements or tests of any kind, so they are just lab toys and not suitable for basing public policy on.
Yep! Many years ago, I found that a brand new, shiny, FORTRAN 77 compiler for an ICL mainframe introduced bugs when you turned off debug mode. Took a lot of going through hex dumps to find out it was the COMPILER, not me!
This sorta leaves a programmer in an almost permanent paranoid mode.
Dear Willis.
See my long post in the previous article.
Climate science is a metaphysical construct reflecting a deep set sense of fear and guilt over mankind’s presumed power to affect Nature, and his terror at the power he thinks he has being misapplied enough to terminate his existence.
Used by people who don’t care anyway to further their political and commercial ends.
A guy called Cnut tried to remedy this years ago, but just turned into an anagram instead.
Do young people today even know about King Canute?
Great wisdom is embedded in Western Culture. That doesn’t suit the postmodern feminists who run academia because it was all created by old dead white men. These idiot scumbag Marxists have missed the biggest lesson passed on to us by the ancient Greeks, and then Canute, which is about the dangers of hubris.
Many youngsters have heard the story of King Canute, but they heard it told by Marxist indoctrinated teachers, so instead of being a story where the wise king shows the folly of human power over nature to his misguided court, it is transmogrified into a story about the folly of kings with little implication for the impact on society.
It’s funny Canute didn’t show his court a model of the tide turning back and claim even greater God given powers. Especially to tax to prevent the tides overtaking the land.
The story of Canute was presented as a story about the folly of kings long before there were any “Marxist indoctrinated teachers”.
It is not generally known that a law passed during his reign was the basis for the American Revolt’s claim that Parliament did not have the authority to tax them. Proof will be available in my book soon to be published.
I can see climate models as useful tests of collecting together what people THINK they know about climate and letting it run for 10 years. Repeat…..
What I don’t like about the climate model crowd now is that they purposely misrepresent what the models even show. If ONE of your runs was close in 2016, another one was close in 2015, another one in 2014, that doesn’t mean they were based on a solid understanding. They could have just ended up there by coincidence because there are only so many options that the end result can be.
Until individual runs can start getting 8 out of 10 years right, they are not to be relied on for policy. Looking at dozens in a suite and saying they each individually got a certain year right is total garbage.
Not one has gotten even one year “right”. They all miss MAJOR elements of the observed climate system; they just sometimes come closer on temperature than at other times.
So if they “adjust” the input parameters to get a correct projection, does that mean the model is correct? What if they have one input with too high a forcing and another too low? Same result! What if they have a hundred a little too low and one big one too high? Same result! What if they don’t like the results and start to “adjust” the inputs that seem to be “holding it back”, convincing themselves that they’ve stumbled onto the magic formula that “explains” climate and generates new funding? Well! Now they’re cooking with gas!
John,
You said, “… models as useful tests of collecting together what people THINK they know about climate…” Indeed, that should be their only function, to test what we think we know about climate. Unfortunately, climatologists seem to be learning little, and arrogantly claim that their results are reliable and useful. In other words, climate modelers think they know everything!
No it must both get at least nine out of ten years right and be able to show the reason the other was not right.
And there must be no need for any adjustment of the data. Engineers could get data on climate accurately enough to not need “adjustment”, so if climate scientists cannot do this then they need to call in professionals to do the job. This is a low-grade commercial quality requirement, not the requirement for the life-critical function that climate scientists claim climate change to be.
Any self respecting engineer would know the need for a proper reference network for at least a dozen of the sites for calibration and for regular inspection of the sites for compliance with the specification for measuring sites.
If reference sites and calibration ones do not match then that is the uncertainty so any deviation in that order cannot be used as evidence in the case.
I would dispute the claim “their predictions suck”. To begin with they get the average temperature of the Earth correct to within 0.2K, which is an error of less than 0.1%. So my question would be: are there any other models of the Earth’s climate that are as accurate? And secondly, if an error of 0.1% counts as “sucking”, what would you consider to be good?
By reference to figure 4 in the following paper, your claim is wrong that:
«‘they’ get the average temperature of the Earth correct to within 0.2K which is an error of less than 0.1%.»
The models miss a lot more than that, both on the top of the atmosphere radiation imbalance and the average temperature of the Earth. Tuning the climate of a global model – Mauritsen et al.
Thanks, Germonio. As I said, they are tuned to the historical temperature, so that is not a valid test. When they are tested on what they are not tuned on, they do very poorly.

In addition, they don’t agree with each other to within 0.2K. Consider:
w.
Mirror mirror on the wall,
Who has developed the dumbest model of them all?
Yes. That’s a classic sign of overfitting: the model is so fine-tuned to existing observations that it outputs too much of the noise in the observations; when provided a fresh batch of observations (inherently laden with signal and noise), the model performs poorly.
The model is an expression of the theory, and to the extent that the model generated testable predictions is the extent to which it is scientific. Importantly, to the extent that the model parameters can simply be tweaked to fit existing data is the extent to which the model is circular and, therefore, pseudoscientific: you are tweaking the parameters to fit existing observations because you just don’t have a good explanation of the parameters in the first place. I would say that sometimes this is a necessary evil, but it absolutely must be repeatedly acknowledged to be so by the model’s creators, purveyors, and consumers. And this last part is just not happening at all it seems.
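To put a number on that overfitting point, here is a minimal Python sketch on made-up data: a modest fit and an over-parameterized fit are both tuned to one batch of noisy observations and then scored on a fresh batch from the same process.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Overfitting in miniature: tune to one batch of noisy observations,
# then score on a fresh batch. All data here are synthetic.
rng = np.random.default_rng(1)

def observe(n):
    x = np.sort(rng.uniform(0.0, 1.0, n))
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

x_old, y_old = observe(20)   # the observations the model was tuned to
x_new, y_new = observe(20)   # the "fresh batch"

for deg in (3, 15):
    p = Polynomial.fit(x_old, y_old, deg)
    in_mse = np.mean((p(x_old) - y_old) ** 2)
    out_mse = np.mean((p(x_new) - y_new) ** 2)
    # the over-parameterized fit typically looks great in-sample
    # and much worse out-of-sample
    print(f"degree {deg:2d}: in-sample MSE {in_mse:.3f}, out-of-sample MSE {out_mse:.3f}")
```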
Germonio: The temp of the entire Earth is not known, nor is it knowable, especially down to a fraction of a degree.
The models only do that…because the temp of the earth changes so slowly
How can you be so sure that the temperature of the earth someone quotes is exact to within 0.2K, plus or minus 0.1%? Perhaps the temperature of the earth is exact only to plus-minus 5K. I think we should consult the book of Daniel to figure out what exactly the temperature of the earth was in some concrete year.
Interesting. Much of the historical temperatures the models are backcasting were taken to the nearest degree. So that’s +/- 0.5 degree. So how can a model get results that are more accurate than the input data?
What a smart trick, using degrees K so that model errors come out to a very small percentage: just 0.1%. Wow, that’s incredibly accurate, we’re all super impressed. But there’s one teensy weensy little problem …
… the change in CO2 concentration that supposedly caused this change is around 0.01% of Earth’s atmosphere. Far from being teensy weensy, a 0.1% error somehow looks absolutely gross against an atmosphere changing by 0.01%.
All of which demonstrates that rating results based on their percentage of something else is mathematical madness.
Fact is, we aren’t even looking in the right place for global warming. We should be looking for total heat content of ocean plus atmosphere (the parts of Earth that relate to climate), not just the atmosphere. There are various rather obvious reasons for this, one of which is that when our spurious atmospheric temperature increases because of an El Nino, that isn’t a warming, it’s a cooling (an El Nino is, you might say, one of Earth’s ways of releasing energy to space).
They don’t need to look for warming in the oceans. The models say it’s there! 😉
Using Kelvin, the total warming since the end of the little ice age is only about 0.2%.
IE, nothing to worry about.
G, your assertion is not true for three reasons.
Essay “Models all the way down” shows that in absolute rather than in anomaly terms the CMIP5 models disagree with each other by +/- 3C.
In the tropical troposphere they run hot by ~2x (Santer with incorrect stratosphere correction) to ~3.5x (Christy).
They failed to reproduce the Pause.
The difference between the models is almost an order of magnitude larger than the total warming (from all causes) since the end of the little ice age.
Weather station measurements are typically good to +/- 0.5 K. The measurements are then “corrected” and “homogenized” through a series of steps which, one by one, make them further and further from actual data. By the time they are all “averaged” to produce a number that has no actual meaning, they no longer represent anything that was actually measured. Models that match that number are tuned to do so, and represent extremely expensive, well-disguised curve fitting of meaningless data.
Germonio,
If you want to work in the Kelvin world, a predicted increase of 3 deg C over the next century is only an increase of about 0.1% per decade. Why should anyone get particularly concerned about that?
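For what it’s worth, the arithmetic behind that figure, taking roughly 288 K as the global mean surface temperature, is simply:

$$\frac{3\,\mathrm{K}}{288\,\mathrm{K}} \approx 1\% \text{ per century} \approx 0.1\% \text{ per decade.}$$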
Germonio, the problem is that over 99% of the range is meaningless. You don’t go based off the absolute error. You go based on the relevant temperature range. Since the average changes less than 1C year to year, that means your error is 20% of your maximum range.
I hope you are being facetious. I just can’t tell anymore.
This one is a real beauty:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori . The bias correction or adjustment linearly corrects for model drift.”
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; Page 967)
So I too can be Carnac if I get to actually take a peek at the future and correct my prediction using knowledge of what actually transpired. What a racket!
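To make the quoted procedure concrete, here is a rough Python sketch of a lead-time-dependent drift/bias correction of the kind described; all the arrays are synthetic toy data, not actual hindcasts or observations.

```python
import numpy as np

# Rough sketch of an a posteriori drift/bias correction: estimate the mean
# model-minus-observation error at each forecast lead time from past
# hindcasts, then subtract it from a new forecast. Toy data only.
rng = np.random.default_rng(2)

n_hindcasts, n_leads = 30, 10                # 30 past start dates, 10 lead years
drift = 0.05 * np.arange(n_leads)            # the model drifts warm with lead time

obs = rng.normal(14.0, 0.1, size=(n_hindcasts, n_leads))
hindcasts = obs + drift + rng.normal(0.0, 0.1, size=obs.shape)

mean_bias = (hindcasts - obs).mean(axis=0)   # bias as a function of lead time

new_forecast = rng.normal(14.0, 0.1, size=n_leads) + drift
corrected = new_forecast - mean_bias         # "linearly corrects for model drift"

print("estimated bias by lead year:", np.round(mean_bias, 3))
print("correction applied:         ", np.round(corrected - new_forecast, 3))
```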
Suddenly I remembered a comment from “Jeff Alberts” to that quote:
:D) :D)
‘posteriori’ — Latin, it is SO descriptive! Who knew.
Just pick a date and pick a temperature! We can make sure it all works out! Sorry, only one date and one temperature per customer/model.
Yeah – the concept of using the mean of models, or selecting the one that has the best fit … ah … posteriori – is absolutely crap.
I mean – judging by the model spread – selecting one model that has the best fit at one particular point in time, to prove that models work fine in general, or that that particular model will continue to fit – is absolutely crap. (And by models I mean climate models.)
Sorry – this seemed more appropriate inside my head than it does in print.
If scientists working on a stock-market model were able to tune it to match the market results of the past ten years, I still wouldn’t trust it with my money to accurately forecast next year’s market. Would you?
Isnt that called a hedge fund?
Grass funds are increasing in popularity.
Totally.
And on so many levels.
1. The computer is pretty well today’s ultimate appeal to authority. So, tell someone to do something because ‘the computer said so’ – what are they gonna do?
It’s like throwing up endless links & references. It makes an impossible amount of work for the fellow arguing against you.
There is nothing currently better for throwing up chaff and noise than computers.
2. How do you know the computer output is correct if not because it is what you expect it to be?
(You’re seeing your own reflection as I’ve said numerous times)
3. Mosher tells us “computers are tools”
Yes, he’s absolutely right about a computer sitting there doing nothing. But give it to someone to actually use and it becomes like the big sharp knife in your kitchen drawer – it can equally be used for slicing cheese as it can as a murder weapon. It’s the intent of the user – how do you prove or disprove that?
You could certainly judge that by meeting the intended user face-to-face but, surprise surprise, computers are removing the need for that. At least according to the computer users.
Fantastic positive feedback. Again, because what drives the posited GHGE if not positive water-vapour feedback?
I think therein is THE major problem – computers and the belief of their infallibility.
“3. Mosher tells us “computers are tools””
Perhaps he had this alternative definition in mind:
“tool:
One who lacks the mental capacity to know he is being used. A fool. A cretin. Characterized by low intelligence and/or self-esteem.”
[snip . . . cut it out . . . mod]
Computers are often merely a substitute for thought. Rather than thinking through a problem ab initio, you develop a model (which of course merely expresses your assumptions and biases), then you throw numbers at it and announce the results as infallible, not to be questioned, because the computer says so.
All that a model tells you is that, if the world behaved in the very simplistic manner which your software models, then the future would be so-and-so. However, until you can demonstrate that your model faithfully includes every significant real-world factor, your model isn’t worth the paper on which the results are printed out.
Since current climate models omit vast swaths of significant real-world factors, such as cloud cover, and don’t have anything like the spatial resolution to model real-world effects, anyone who believes in their results is a fool.
Roger,
+1
For models of complex and/or chaotic systems it doesn’t even require that the model be simplistic to be impressively wrong. Even the smallest item of input not quite right creates nonsense in pretty short order. This reflects what is known about climate and the much greater amount that is not understood.
All scientific theories are models, usually systems of equations. The problem with computerized climate models is simply that they have never been validated. Of course validation would take 30 years (climate has previously been defined at WUWT as the weather averaged over 30 years).
Some things are actually simple. Liars and frauds try to convince us otherwise using my ‘favorite’ technique, BS baffles brains.
The tremendous technological advances of western society are due to the simple scientific method. It replaced the previous, in the west, reliance on ancient Greek experts. The epitome of the old approach is the Summa Theologica. In it, Saint Thomas Aquinas uses logic and appeal to experts to prove everything. If you want to debate how many angels can dance on the head of a pin your starting point should be the Summa.
Dr. Sophie Lewis is a real scientist and a real expert. That does not make her reliable and trustworthy. Experts are extremely fallible because of their overconfidence. It’s this overconfidence that lends Dr. Lewis her frisson of arrogance and self delusion. It reminds me of:
CommieBob,
Apparently Sophie has a PhD and an academic position with the putative title of scientist. However, I question whether she really has the mindset and thinking pattern of a scientist. She seems not to have a firm grasp of the Scientific Method, and thinks that model results which she personally approves of are sufficient reason to endorse them as valid. She is living in a subjective fantasy world where she thinks her education and job title are sufficient to suppress objections and obviate the need for further research.
Is this guy Willis a *snip*, or what?
* I just snipped this rather than trashing the whole comment because you are new here(?). Miss Swiss, we do not allow this type of post. Your posts that contain arguments with the substance of the article are welcome, personal attacks or gratuitous insults of the author or other posters are not.
Thanks – Mod
“Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.”
Except that is not the theory.
The Theory is that the climate of the planet is the result of
ALL external forcings
AND
Internal variability.
So if you build a model of the planet and you only include solar forcing and say volcanoes… Guess what?
Your climate model will really suck…
So you add Methane
it gets better
you add HFCs
it gets better
you add CO2
And Dang, you can model this very complex thing and get answers that are correct to within small percentages.
Now the climate model doesn’t represent the whole theory. The same way CFD code doesn’t represent and can’t represent the flows of fluids. Further, nobody believes in the theory ( climate results from external forcing and internal variability) BECAUSE of the model. And finally, if models were just crazy wrong, if they showed cooling from the addition of CO2, we would know the models were wrong, because they violate what we have known since 1896. In short, models add nothing to the foundation of our knowledge, and if they fail, that says nothing about the theory. it rather means, the model is wrong, not the theory that it was struggling to represent.
In short. Destroy every model ever constructed. we still know what we knew in 1896 ( CO2 causes warming, like ALL GHGs ) and we still know what steam engineer Callendar knew: GHGs cause warming
I don’t think many of us here deny the physics of GHG warming, reduced to its simplest equation. What’s at issue is near-impossibility of modeling the Earth’s fantastically complex, chaotically-coupled heat flows. Especially the feedbacks which can counteract or even reverse warming, many of which are poorly understood and hence poorly modeled.
“You can model this very complex thing and get answers that correct to within small percentages.” As radiation is central to your argument, you have to use absolute temperatures. They are around 300 K, a small percentage – let’s say 2% – is 6 degrees K = 6 degrees C = 10 degrees F.
Mosher “Except that is not the theory.”
You haven’t gotten to the Theory Stage, you are barely in the Hypotheses Stage. Write back when you have the “in-depth explanation of the observed phenomenon”.
“Hypotheses, theories and laws are rather like apples, oranges and kumquats: one cannot grow into another, no matter how much fertilizer and water are offered,” according to the University of California. A hypothesis is a limited explanation of a phenomenon; a scientific theory is an in-depth explanation of the observed phenomenon. A law is a statement about an observed phenomenon or a unifying concept,
https://www.livescience.com/21457-what-is-a-law-in-science-definition-of-scientific-law.html
If this was about CO2, there wouldn’t be supercomputers crunching the numbers – it could be done on the back of a packet of Craven “A”. Instead it is the rather more inscrutable feedbacks. Feedbacks as modelled claim calamity; this means that civilization has to be destroyed in order to be saved.
Steven Mosher: “And Dang, you can model this very complex thing and get answers that correct to within small percentages.”
By introducing Celestial Spheres, planetary orbits could be modeled to ‘within small percentages’ with Earth as the center of the solar system.
This is the core issue. A model cannot verify a theory, because there are many (perhaps infinite) models that could produce results close enough to the observed (and highly abstracted and/or AVERAGED) behavior of any system.
And that is why the falsifiability of any theory is so important.
And that is what models are very useful for: falsifiability (i.e. their ability – or inability – to predict the abstracted and averaged behavior, given measured/known changes of their supposed “exogenous” variables).
And the GHG driven climate models have done an excellent job of that.
That should have ended this debate years ago. And climate scientists should instead be looking hard at the many other variables and missing dynamics (i.e., relationships) that they have failed to understand or put into their models, or even consider important. Fortunately a few are (I think).
You get answers that are meaningless! The answer doesn’t tell you which parameters are wrong, in which direction they are wrong or by how much they are wrong. They don’t tell you which factors you “forgot” to include or are utterly unaware of. They don’t tell you how the various factors interact. So even if you get a “correct” answer it means absolutely nothing! These models aren’t even used to get or test information.
They are used to put a scientific stamp of authority on a computer game for political purposes!
you add CO2….how much warming do you tell it CO2 causes?
“And Dang, you can model this very complex thing and get answers that correct to within small percentages.”
Because the temp of this planet only changes in small percentages…
Oddly enough….if the models were tuned to past raw data….their linear predictions would be more accurate
….which, if anything, shows that adjustments to past temps, cooling the past……is fake data
I’ll give you that. What he told us would result in a completely non-alarming warming of maybe a degree and a half per doubling of CO2.
CAGW happens only if there is positive feedback. Even if we totally ignore natural variability and attribute all the warming in the last century and a half to CO2, the evidence is that the net feedbacks are negative. link
“I’ll give you that.”
I’ll take your gift back. We know only that it has a potential to cause warming. Negative feedbacks can attenuate, or even completely squash that warming potential.
But chaos – what about chaos?
Yes. What about chaos? Very good point, one that not only cAGW advocates sweep under the rug, but also sceptics? Why would that be?
The ground truth is that chaotic systems can’t be modeled predictively. Why, you might ask? That was the question that drove Edward Lorenz to develop chaos theory back in the early 1960s. You knew Ed was a meteorologist already I suppose and you posed a rhetorical question, but it deserves discussion.
What about chaos?
As a species we lack the mathematical skill to model chaotic systems. Quantum theory suggests we won’t ever be able to. The Navier-Stokes problem demonstrates we certainly can’t now, using existing mathematics and computational methods, which aren’t to be confused with computational abilities; faster computers won’t solve this problem.
So, what about chaos? And why are we having this silly debate?
Yeah, but what we “knew” in 1896 changed in 1906 when Arrhenius admitted that he’d overestimated the impact of a doubling of CO2 by 250-300%.
Correct, Aphan.
…and it is at this point that the Mosh will disappear.
Now, please add water vapor.
Please provide parameters for Warming potential AND negative feedbacks for water vapour. Not theoretical or modelled, tested and verified.
When you say that the models get better as you include more GHG components, you must be referring to the way they give a fair correspondence between temperature anomalies and the models during the 1975-2000 warming period. That is often cited as a reason for confidence in the models. But the models simply do not track the data well during the 1915-1945 period when the earth warmed at a similar rate. GHG concentrations were too low to have much of an effect on either reality or models in that period, which leaves natural variability as the likely cause of that warming — and a strong suspicion that it was also the cause of the 1975-2000 warming.
Not to mention the previous interglacial warmings of the current ice age.
“…the models get better as you include more GHG components…”
Funny, the more terms I include in a polynomial or Fourier series representation, the better I can fit the data, too.
That’s what it comes down to. It is the same process. Curve fitting. And, the more complete your basis functions, the better you can make the expansion fit.
Far from being evidence in favor of the models, it is mere tautology.
@Bartemis:
But no one ever publishes a model based on my personal favorite independent variable; historical pork belly prices on the Chicago Exchange.
I’ve advanced this factor for consideration many times in many different forums and venues. No one has ever included it in study. I feel bad and I need to find a safe space. I think I’m being bullied.
I need a support group. Is there a support group for middle aged white male statisticians?
Mosher ==> The actual questions that need to be answered in today’s world are:
Supplying trivial answers to trivial questions [“GHGs cause warming”] is not climate science — it hardly even scores as advocacy or politics.
“…do more GHGs cause more warming?”
YES! That is THE question. This is a dynamic system. It is not required that it respond the same in all states. We want the incremental sensitivity. It is quite possible to have a GHE that works up to fundamental limits, and then peters out beyond them.
Bart ==> It is not only a dynamic system, it is composed of (at least two) coupled non-linear dynamic systems — many bets are therefore way off.
Mosher,
You said, “…,if they showed cooling from the addition of c02, [sic] we would know the models were wrong.” So, you feel that they would have to be completely ‘bassackwards’ before they should be invalidated? What about a quantitative difference that makes them unsuitable for the purpose of forecasting? The current models MAY have the trend right, but if the magnitude is wrong, then they aren’t really useful for long-range forecasting, which is what they are being abused for (double entendre intended). You should ask yourself just what is the purpose of all the money spent on climate models and whether that purpose has been achieved. David Middleton’s graphs suggest not!
@Mosher – I don’t understand how you can get even vaguely accurate results without including cloud cover, water vapour and the effects of tropical thunderstorms, amongst other factors?
And another question – why do you describe the results as accurate when no climate model has been accurate in predicting temperatures?
I simply don’t understand your comment “if they showed cooling from the addition of CO2, we would know the models were wrong.” – how could they possibly do that when they are programmed to treat CO2 as warming?
Mr. Mosher
“if they showed cooling from the addition of CO2, we would know the models were wrong.”
Completely wrong conclusion.
If the MODELS showed cooling from the addition of CO2 where the THEORY predicts warming, then we would know that the THEORY is wrong, especially when the temperature DATA trends do not follow the CO2 data trend.
Old England asks: “why do you describe the results as accurate when no climate model has been accurate in predicting temperatures?”
I don’t really know why Steven does this, you’re right that it makes no sense and anyone who can read a graph can see the models are just so wrong there’s no doubt of it, but he and many others persist in saying they’re right anyway.
It’s as if they think we’ll all just suspend disbelief and agree with them. Maybe if there’s enough of them saying it, and they say it long enough, we’ll all just abandon logic and agree?
I think that’s really the strategy they’re depending on. As I recall it was one successfully used by the German National Socialists back in the 30’s. Some guy named Herman I think? Could be wrong about that, but I’m pretty sure it’s a famous kind of propaganda.
Old England: Then there’s always the more pedestrian “I Want To Believe” axiom promoted by Fox Mulder on the TV series “The X-Files”. That might also explain Steve.
Steve Mosher writes: “And Dang, you can model this very complex thing and get answers that correct to within small percentages.”
We see this claim repeated endlessly Steven; over and over the claim is made that “the models are correct to within small percentages”, but it flat out isn’t true. Repeating the lie works for the addled, but it doesn’t work for anyone with a working brain.
Essentially, it’s just another version of the false “appeal to authority”; the model results have been published. They’ve been published by an authority. Pay no attention to the fact they’re demonstrably wrong, we think they’re right and we will brook no argument!
It’s no way to win a scientific debate Steve. Falls right on its face. It’s right up there with “97%”. It’s crapolla. Pure nonsense.
You show me a model that actually predicts climate and we’ll talk? Until then, you got nothin’ dude. Nothin’.
PS: And what’s with this “averaging” nonsense? The idea that you can take the outputs of a hundred or more unique models, average them, and get anything meaningful? This is basic experimental stats; you can’t do that. It’s flat-out wrong. Braindead stupid. Who let these fools out of their cage?
The biggest problem with tuning the models to past climates is that most of the factors that impact climate are not known with any degree of certainty and the further back you go, the worse that problem gets.
Let’s just look at aerosols, however most of the other parameters that are used for tuning are just as bad.
How much was released in any given year and from where?
There are many types of aerosols each of which has a different impact on the climate.
Things like the height of the stack and weather conditions at the time of release have a huge impact on how long the aerosols stay in the air and how far they spread.
In places like Europe and the US/Canada, little is known about how much and what types of aerosols were released prior to the existence of the EPA, when companies were first required to keep track of that stuff.
For the rest of the world data is sparse to non-existent.
As a result the “modelers” are permitted to pick whatever number is needed to make the numbers work.
So yes, they are able to model historical temperatures, but it has nothing to do with whether the models are accurate or not. Just that they have enough wiggle room with their parameters to make it look like they are accurate.
“The biggest problem with tuning the models to past climates is that most of the factors that impact climate are not known with any degree of certainty and the further back you go, the worse that problem gets.”
No, the biggest problem is it requires using an empirical model to predict a system’s behavior outside the period of observation. That’s a fundamental no-no in statistical modeling Mark. Never valid.
Example: I have measurements of tree ring widths and temperature over a period of 100 years. I fit a regression model to those values. I have no physical theory to support that relationship, I simply observe agreement.
I can, legitimately, use such an empirically derived model to predict the value of temperature given the width of a tree ring within that 100 year interval, but I cannot use that model to predict the value of temperature outside that time period. That’s “extrapolation” and it can’t be done using an empirical model.
The procedure is statistically/experimentally invalid. We can never extrapolate from an empirical model. It’s a rule Mark.
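A toy Python version of that tree-ring example, with an invented "true" relationship, shows how quickly an empirical fit goes wrong once you leave the calibration range:

```python
import numpy as np

# Empirical fit vs. extrapolation: the straight-line fit is fine inside the
# calibration range and badly wrong outside it. All numbers are invented.
rng = np.random.default_rng(3)

width = np.linspace(0.5, 2.0, 100)                              # ring widths seen in calibration
temp = 10.0 + 4.0 * np.log(width) + rng.normal(0.0, 0.2, 100)   # the real (unknown) relationship

slope, intercept = np.polyfit(width, temp, 1)                   # the empirical regression model

for w in (1.2, 4.0):                                            # inside vs. well outside the range
    fitted = slope * w + intercept
    actual = 10.0 + 4.0 * np.log(w)
    print(f"width {w}: regression says {fitted:.2f}, 'true' value {actual:.2f}")
```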
“if they fail, that says nothing about the theory. it rather means, the model is wrong, not the theory”
Good, so CAGW is just a theory, not an absolute irrefutable fact. Please tell Al Gore etc.
Steven Mosher
August 11, 2017 at 2:04 am
“In short. Destroy every model ever constructed. we still know what we knew in 1896 ( CO2 causes warming, like ALL GHGs ) and we still know what steam engineer Callendar knew: GHGs cause warming”
——————–
Short…….and to the point, I think.
No need for models, we do already know about GHGs, and can’t risk having that confused by the models…..
Besides, what else is there to do if we figure things out that fast… 1896 is like yesterday…:)
Thanks Mosher…
cheers
Mosh,
“because they violate what we have known since 1896”
Who is the “we” you refer to in that statement? Because I’ve known for years that Arrhenius “knew” that the calculations he’d arrived at in 1896 were wrong. He publicly changed them in 1906. I’ve also known since viewing the website linked to below, that many of Arrhenius’s assumptions were wrong, his attributions to other scientists were false, his methods flawed, and that his theory of “backradiation” seemingly violates the laws of thermodynamics.
http://greenhouse.geologist-1011.net/
“However, Arrhenius’ calculations are based on surface heating by backradiation from the atmosphere (first proposed by Pouillet, 1838, p. 44; translated by Taylor, 1846, p. 63), which is further clarified in Arrhenius (1906a). This exposes the fact that Arrhenius’ “Greenhouse Effect” must be driven by recycling radiation from the surface to the atmosphere and back again. Thus, radiation heating the surface is re-emitted to heat the atmosphere and then re-emitted by the atmosphere back to accumulate yet more heat at the earth’s surface. Physicists such as Gerlich & Tscheuschner (2007 and 2009) are quick to point out that this is a perpetuum mobile of the second kind – a type of mechanism that creates energy from nothing. It is very easy to see how this mechanism violates the first law of thermodynamics by counterfeiting energy ex nihilo, but it is much more difficult to demonstrate this in the context of Arrhenius’ obfuscated hypothesis.”
You might check out his Most Misquoted Scientific Papers section too.
One point that I have never seen discussed is that the machine learning models supposedly have three phases.
The first is the learning phase, where they run iteratively against the real historical dataset in order to produce a good fit.
The second part is where they run against a more recent part of the historical dataset, that wasn’t included in the first phase, and demonstrate that they can track that.
The third part is where they are allowed to run free and predict the future.
The argument is that the second part ‘proves’ that the model can be trusted. In reality it merely forms an extension of the learning phase, because it would take a very peculiar person to release a model to the world that failed the second part.
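For reference, a minimal Python sketch of those three phases on a toy data set looks like the following; the crucial discipline is that the final hold-out is scored once and never used for tuning, which is exactly the discipline the comment above suggests is not being followed.

```python
import numpy as np

# The three phases in miniature: learn on one split, check on a second,
# and keep a final test split that is scored once and never used for tuning.
rng = np.random.default_rng(4)

x = rng.uniform(0.0, 10.0, 300)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 300)    # toy "historical dataset"

idx = rng.permutation(300)
train, val, test = idx[:200], idx[200:250], idx[250:]

coeffs = np.polyfit(x[train], y[train], 1)       # phase 1: learning

def mse(i):
    return float(np.mean((np.polyval(coeffs, x[i]) - y[i]) ** 2))

print("validation MSE:", round(mse(val), 3))     # phase 2: model selection / checking
print("test MSE:      ", round(mse(test), 3))    # phase 3: the one-shot verdict
```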
“it would take a very peculiar person”
Or maybe someone not smart enough to make an honest living.
If you’re a third-rate student taking a PhD just because you can’t face leaving school, wouldn’t you choose a subject where no-one questions your work as long as you reach the ‘right’ answer? It’s been going on for so long that virtually everyone in climatology fits into that category.
Yep. Selection bias. Texas sharpshooter’s fallacy.
Greg: first it’s important to understand how machine learning (the type you describe) works. Only then can you understand why a five year old who’s learned to tell the difference between a car and a cow can correctly classify a 1971 Porsche 914 as a “car” and not a “cow”, even though that child has never seen a 1971 Porsche 914.
This is an example of poor reasoning by analogy.
Cows don’t have wheels… 🙂
1) Models are not useful to glean the future.
2) Models are useful to highlight the things you do not yet know.
If you claim that your models are accurate then you still can’t do 1) and you completely miss out on 2).
Best comment by far.
I agree with your comment in a general way, Ed, but when you plug in a significant number of variables you don’t really get anything that tells you which ones are right or wrong. If a baseball game goes 20 innings and ends 33-32, what single event (pitch, swing, catch, error, stolen base, injury, coaching decision, etc) decided the game? I have simplified for the purposes of modelling!
You know John, over the years we statisticians have made some (arguably barely useful) progress on that subject.
If you look at the dark art of multiple regression (AKA Principal Factor Analysis in its more advanced form) you’ll discover the “F to Enter” test, which sets a threshold of acceptability for a variable to enter the regression. In essence, if the addition of a variable doesn’t significantly improve the model fit, it’s rejected.
This has the unfortunate side effect of promoting what I call the “Cuisinart” approach to model development; the investigator collects all data that might possibly be relevant to determining the value of the dependent variable, pushes the button, and waits for the computer to spit out a model.
It’s about as far from science as you can get and still use statistics.
Sorry, “Principal Component Analysis”. Factor, component, it’s all good…
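For the curious, here is a rough Python sketch of the "F to Enter" idea being described: a forward-selection loop in which a candidate variable is admitted only if the improvement in fit passes an F test. The data, the 95% threshold, and the variable names are invented for illustration.

```python
import numpy as np
from scipy import stats

# Forward selection with an "F to enter" threshold, on invented data.
# Only columns 0 and 2 actually drive y; the rest are noise candidates.
rng = np.random.default_rng(5)
n = 200
X = rng.normal(size=(n, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=n)

def rss(cols):
    """Residual sum of squares of a regression of y on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

selected = []
while len(selected) < X.shape[1]:
    remaining = [c for c in range(X.shape[1]) if c not in selected]
    df_resid = n - len(selected) - 2                  # residual df if one more variable enters
    f_stats = {c: (rss(selected) - rss(selected + [c])) / (rss(selected + [c]) / df_resid)
               for c in remaining}
    best = max(f_stats, key=f_stats.get)
    if f_stats[best] <= stats.f.ppf(0.95, 1, df_resid):   # the F-to-enter threshold
        break                                             # no candidate improves the fit enough
    selected.append(best)

print("variables admitted:", selected)   # typically columns 0 and 2
```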
Climate Science is a religion.
The faithful followers of Climate Science do not need any real world test or reality check.
They just know.
So don’t pester them with any test of falsification.
Climate models are very expensive opinions, not worth the air they are describing
Why don’t we all go for the honest option and call models what they really are – computer games?
Yep. Virtual reality generators designed to keep sunflower seeds in the feeder. Let climate hamsters create a model that actually aligns with reality and see how quickly those seeds disappear..
Curley, the climate hamster: ICISIL, I resemble that remark! Now bug off and let me climb back in my wheel so I can save the world.
http://www.urbandictionary.com/define.php?term=I%20Resemble%20That%20Remark
This is not your longest post but it may be one of your best.
My thoughts too. I have always held that the climate models synthesise everything that is known, surmised and conjectured about the climate and what drives it. They seem to be collectively and individually wrong so there are gaps in the knowledge and/or errors with the theories.
Good one w. What happens next?
~
All models are wrong. Some are useful. Climate models are not among the useful.
“Falsifiability is the idea that an assertion can be shown to be false by an experiment or an observation, and is critical to distinctions between “true science” and “pseudoscience”.”
Wow, you can’t say it more clearly
Climate science —> pseudoscience
But later on she somehow forgets this statement, participates in this collective cognitive hallucination we call AGW, and comes up with:
“Climate models are carefully developed and evaluated based on their ability to accurately reproduce observed climate trends and processes. This is why climatologists have confidence in them as scientific tools, not because of ideas around falsifiability.”
pure lunacy!
Willis asks:
Taking the multi-model mean (MMM) as a proxy for all models, and concentrating on the CMIP5 surface models only (because I don’t have data for the lower troposphere models), then I would have to say that the climate models have been useful in at least one respect: they have correctly projected the direction of travel, i.e. continued warming.
That sounds a little trite, because you might argue that, starting from 2005 as I believe the forecast periods in these models do, there was a 1/3 chance of continued warming anyway (the other options being cooling or zero change). But that observation has the benefit of hindsight. Recall that since 2005 there have been several scientists and commentators predicting imminent cooling, based variously on changes in the PDO or solar output, etc. Don Easterbrook springs to mind; so too David Archibald, to name but two who had their cooling forecasts featured here at WUWT.
Those cooling predictions have demonstrably failed. The observations (again using the multi-model mean as a proxy for all the models) have remained inside the temperature projection envelope and have even, by some measures, exceeded the model projections. For instance, the year 2016 was warmer in reality than was projected by the CMIP5 MMM; though using a longer rolling average the observations are still on the cool side of the MMM, but not by as much as some here seem to believe.
So I would summarise by saying that as a basic predictor for the long term direction of surface temperature travel, the CMIP5 surface models have been pretty useful; certainly much more useful than those models generated around the same time that foresaw only cooling.
How would we know? They have used models to adjust the recorded station data, thus making the surface data simply another subset of the models. The surface data is constrained somewhat by the underlying readings taken in the real world, but only somewhat. The fact that the UHI night-time warming trend from the city gets smeared across all the rural sites during the homogenization and the fact that other model based adjustments are made means that the “empirical baseline” data is already woefully polluted.
OweninGA
We know that from 2005 onwards the warming trend in the surface data sets is consistent with the warming trends seen in the lower troposphere data sets. UAH is the coolest, but it’s still 0.20 C per decade warming since 2005. The other satellite TLT set, RSS, shows 0.23 C/dec warming over the same period; the same rate as HadCRUT4. GISS and NOAA are only fractionally warmer (0.25 and 0.26 C/dec respectively).
So if we’re going to say the surface data has been improperly adjusted upwards since 2005 then we’re also going to have to call out the satellite data sets for doing the same thing. The alternative is that both are right, and it really has warmed at a rate of 0.2 – 0.25 C/dec since 2005, roughly what the surface models projected.
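As a side note on how a "C per decade" figure like those is arrived at: it is just an ordinary least-squares slope over the anomaly series, scaled to decades. A Python sketch on a synthetic series (not the actual UAH/RSS/HadCRUT data):

```python
import numpy as np

# How a "C per decade" trend is computed: an OLS slope over the anomaly
# series, scaled from per-year to per-decade. Synthetic series only.
rng = np.random.default_rng(6)

months = np.arange(2005.0, 2017.0, 1.0 / 12.0)                           # monthly time axis
anomaly = 0.02 * (months - 2005.0) + rng.normal(0.0, 0.15, months.size)  # toy 0.02 C/yr series

slope_per_year = np.polyfit(months, anomaly, 1)[0]
print(f"trend: {slope_per_year * 10:.2f} C per decade")                  # roughly 0.20 C/decade
```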
Your time series results include several large El Nino events which are natural, have nothing to do with increasing atmospheric CO2, and whose effects are not included in any CMIP5 model. Yet you include them without mention in your conclusion about modeled surface warming. Very strange.
Doonman
The time series runs from 2005, since when there have been 3 El Nino and 3 La Nina events:
What’s ‘very strange’ is that you choose to mention the natural warming effects of El Nino periods but chose to ignore the natural cooling effects of the La Nina periods. Are you saying we should subtract all the natural warming from the observations but shouldn’t compensate for the natural cooling? Sounds like a good way to introduce a cooling bias.
As far as I know the models do incorporate ENSO events, though in a random way since obviously the exact timing of such events can’t be predicted. This is one of the reasons for the variation in the model outputs.
DWR54,
It isn’t sufficient to have the sign of the trend correct. To be useful, there must be a very small quantitative error in the slope of the trend. It makes a large difference in the proposed response if there is one or two orders of magnitude difference between reality and the modeled reality.
Would you say that the CMIP5 models have been more or less useful than the several other models initiated around the same time that projected cooling over the same period?
I would say ‘more’ useful.
DWR54 August 12, 2017 at 2:08 am
Thanks, DWR. First, which are the “several other models initiated around the same time” that are NOT part of the CMIP5 group?
Second, you say they’ve been “more useful” … but for what? What actual, actionable information have we gotten from the CMIP5 models?
Regards,
w.
Hi Willis, thanks for the response.
I was referring to forecasts by, specifically (since they were featured on this site), Don Easterbrook and David Archibald. Archibald’s was published in the trade journal Energy and Environment; Easterbrook made his cooling forecasts on blogs only, as far as I can tell.
Insofar as they predicted continued warming, the CMIP5 models have been useful. If you were a betting man in 2007/8 (which I’m sure you’re not) and you had a 3-way choice to bet on:-
1. Continued warming;
2. No change; or
3. Cooling
Then you would be a happy man had you listened to the CMIP5 model projections. Less so if you had paid attention to Easterbrook or Archibald.
Please explain the pause using model inputs consistent with those used in the model which provides the most accurate hindcast.
A ‘pause’ or even an ‘acceleration’ are easy enough to generate in any global temperature data set due to natural variability, such as ENSO or aerosols, etc, provided that the period chosen is of a short enough duration.
DWR54 writes: “Taking the multi-model mean (MMM) as a proxy…”
You understand that “taking the multi-model mean” is a procedure that’s so far beyond acceptable in the realm of science and statistical methods it should never have been published in a respectable journal?
Seriously. What you support, the method proposed, is completely without merit. It’s absolute junk. The worst sort of lie.
The multi-model mean is just a way of averaging the output of all the models. It’s been used many times by authors on this very website, such as Bob Tisdale and others. There’s nothing wrong with it, per se.
If you prefer spaghetti graphs then you can just run the whole ensemble and add the observations to those. They will be somewhere in the middle of the pack.
DWR54 August 12, 2017 at 2:13 am
Well, yes, there is something wrong with it. Unless we know that the models are a) independent and b) completely explore the parameter space and c) have been verified and validated, it’s just garbage in, garbage out. However, for the CMIP5 models none of those is true.
You appear to be mistaking graphing an average of the models (which you surely can do), for that average having some meaning and some greater validity.
Regards,
w.
Very well expressed Willis. Thanks.
DRW54 writes: “The multi-model mean is just a way of averaging the output of all the models.”
Yes of course it is. But why would you rationally combine the average length of a fish with the average length of a mammal? You wouldn’t. Why?
Because it tells you nothing. If you average the length of a Pacific Smelt with the length of an African Giraffe, you’ll certainly get an arithmetic average, which tells you exactly nothing about smelt or giraffes, and that would be the point.
This is very basic statistics DWR, very basic. Would you like a complete treatment of this subject? I suggest Box, Hunter and Hunter, “Statistics for Experimenters”, 4th edition.
Not rocket science DWR, but science anyway.
Willis Eschenbach
Rather I was simply using the multi-model average as shorthand for the model spread as a whole. What’s wrong with that exactly? Bob Tisdale did it here for years without any negative comments that I’m aware of.
The average doesn’t necessarily have some greater meaning or validity; it’s just a handy way of showing how the models, as a group, are doing against observations.
Bartleby
Erm, who’s doing that??
I’m comparing the average of all the CMIP5 models with observations.
Indeed.
DWR writes: “What’s wrong with that exactly?”
What’s exactly wrong with that is you’re pretending different things are the same. Smelt aren’t Giraffes, even though the math lets you average them. In the same way, no two climate models are similar enough to support the idea of a meaningful average.
I mentioned the book, it seems you may not have consulted it? You need to study the subject a bit or you can just take my word for it. I’ve summarized here but if you seek validation you’ll need to study a bit more. I can say without doubt that what’s being done by “averaging” the output of various climate models is pure junk.
More poetically, “junk of the purest form”.
Do they tell us how many runs they throw away? How many fail sanity checks and are aborted, how many produce an unwanted result and are stuck at the back of a drawer somewhere? Is there a set of limits imposed on the models during or post run?
Rule one of climate ‘science’ – if the values differ between model and reality, it is always reality which is in error – takes care of any such problems.
For who needs facts when you have ‘faith’?
[ if the values differ between model and reality, it is always reality which is in error ]
Reminds me of an Andy Rooney anecdote when he said he remembered an event that happened while he was a field reporter during WW2. He referenced his notes from the time and saw that they contradicted his memory, so he concluded that his notes were wrong.
… just what is it that are they [models] good for?
Any story you want to tell.
It’s not the models per se. They’re deterministic expressions of ideas, as you say. It’s the premises they assume that are the culprits. Models obscure the foundations of the argument by substituting a simplified, easier-to-grasp visual representation. The caveats and erroneous assumptions are lost to the conclusion. Sometimes these defects don’t matter — think fluid dynamics models engineers use — where model output is good enough for the design objective. In climate science it does matter, because model results are incapable of identifying the causes of change, the actual design objective. The best they can do is to find an “association” between measured variables — and then only with poor predictive power. At worst they tell a story that borders more on fable than non-fiction.
Gary writes: “think fluid dynamics models engineers use — where model output is good enough for the design objective.”
Gary, there’s a very large difference between the uncertainties of computational fluid dynamics (or, for that matter, thermodynamics) and the current “state of the art” in climate modeling.
As you mention, CFD modeling is useful. It isn’t precise, which is why we have wind tunnels, but it’s useful. The reason it’s useful is that it’s based on accepted theory drawn from physics. Not only is it based on accepted theory, its limitations are well understood. It’s a limit of mathematics that prevents CFD from being entirely predictive. That limit is summarized as the Navier-Stokes problem.
Climate models don’t have this excuse (though they do have the same problem), nor are they “usefully” predictive. In fact they’re so wrong they’re laughable. They should have been ash-canned years ago. There is no underlying physical theory to support them, which is why they “aren’t even wrong”.
I’m a bit tired of seeing this comparison made, please excuse me.
I have been quite impressed and somewhat intrigued by the writings of Emeritus Professor Munshi, what I understand of them that is. Quite an eclectic range of subjects but usually very readable and well constructed. Particularly some of the time series ones (and Chess). For those interested, here is a link to some of his published papers.
https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2220942
Thanks, the paper Willis cited is well written, but the subject matter is obscure and requires more than a few minutes thought jammed in between pulling weeds, washing the mutt and hooking up another wire in the remote entry unit I’m trying to wedge into an ancient Nissan. I’ll be interested to see what Munshi might have to say on other subjects.
I think models are essentially like an equation: they produce an output provided the parameters are correctly known and the equation is correct for that model.
So x + y = z.
However, just like in mathematics, the values for x and/or y might be unknown, and the equation itself might not be relevant with respect to the value of z; therefore the model can only be useful if x and y have high certainties and the equation is also appropriate. So 1 + 2 = 3, but 1 x 2 does not = 3. And if either x or y is unknown then one doesn’t know what z is (e.g. x + 2 = ?).
The more uncertainty and the more parameters, the less likely the output is correct. And one also has to know the correct equation is being used.
Models are really only useful where there is reliable information on parameters x and y, there are fewer and higher-certainty variables, and the equation is appropriate, meaning there is only a small degree of uncertainty, which the model then addresses and functions to provide the output. It essentially fills in gaps where small uncertainties exist; it does not fill in large uncertainties. With high degrees of uncertainty in either parameters or equation the models provide no reliable output, or just false outputs.
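A quick Monte Carlo sketch in Python makes the point about input uncertainty concrete, using the same toy z = x + y "model"; all the numbers are illustrative:

```python
import numpy as np

# Propagating input uncertainty through the toy model z = x + y:
# the spread of the output grows with the spread of the inputs.
rng = np.random.default_rng(7)
n = 100_000

for x_sd, y_sd in [(0.1, 0.1), (1.0, 2.0)]:
    x = rng.normal(1.0, x_sd, n)
    y = rng.normal(2.0, y_sd, n)
    z = x + y
    print(f"input sd ({x_sd}, {y_sd}) -> output sd {z.std():.2f}")
```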
Keep in mind that your equation depends on what is being combined. 1 + 2 does not equal 3 if x is units of water and y is units of alcohol. And a model is not the system being modeled.
Thingodonta,
You left out the ‘tuning factor’ in your model! It should be x+y+k=z. With the appropriate selection of k you can get any result you need! (Do I need to add /sarc?)
“they have wildly overestimated the changes in temperature since the start of this century”
They’ve been wildly overestimating the changes in temperature for a lot longer than that.
Hey, they can wildly overestimate temperature changes hundreds of thousands of years into the past or future if you so desire. Try doing that with any other tool.