REPORT: $127 Million Climate Supercomputer No Better Than ‘Using A Piece Of Paper’

 

Energy

Engineer by data storage system. (Credit: 430257868/Shutterstock)

From The Daily Caller

Michael Bastasch

11:19 AM 08/03/2017

A new study using an expensive climate supercomputer to predict the risk of record-breaking rainfall in southeast England is no better than “using a piece of paper,” according to critics.

“The Met Office’s model-based rainfall forecasts have not stood up to empirical tests and do not seem to give better advice than observational records,” Dr. David Whitehouse argued in a video put together by the Global Warming Policy Forum.

Whitehouse, a former BBC science editor, criticized a July 2017 Met Office study that claimed there is a one-in-three chance that parts of England and Wales will see record rainfall each winter, largely due to man-made climate change.

Using its $127 million supercomputer, the Met Office found in “south east England there is a 7 percent chance of exceeding the current rainfall record in at least one month in any given winter” and “a 34 percent chance of breaking a regional record somewhere each winter” when other parts of Britain were considered.

“We have used the new Met Office supercomputer to run many simulations of the climate, using a global climate model,” Met Office scientist Vikki Thompson said of the study.

The Met Office commissioned the study in response to a series of devastating floods that ravaged Britain during the 2013-2014 winter. Heavy winter rains caused $1.3 billion in damage in the Thames River Valley.

Scientists said supercomputer modeling could have predicted the flooding. Thompson said the supercomputer “simulations provided one hundred times more data than is available from observed records.”

But Whitehouse said the supercomputer’s models did “not give any better information than what could be obtained using a piece of paper.”

Using observational records, Whitehouse argued the 7 percent “chance of a month between October and March exceeding the record for that month in any year is equivalent to a new record being set every 86 months.”

“New monthly records were set twice in the 216 October-March months between 1980 and 2015,” he said. “Therefore the ‘risk’ of a new record for monthly rainfall is 5.5% per year, according to the record.”

“Between 1944 and 1979, there were three new record monthly rainfalls – an 8.7 per cent chance of any month in a year exceeding the existing record,” Whitehouse continued, adding that “between 1908 and 1943, there were 4 record events – a risk of 14.5%.”

“The risk of monthly rainfall exceeding the monthly record in the Southeast of England has not risen, contrary to many claims,” he argued based on the observational data, adding the “Met Office computer models do not give any more reliable insight than the historical data.”
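
For readers who want to check the arithmetic, here is a minimal Python sketch of how two of the quoted figures work out, assuming six winter months (October to March) per year; the function names are illustrative only, not anything from the Met Office study or the GWPF video:

```python
def annual_risk_from_records(new_records, years):
    """Annual chance of a new monthly record, estimated from the observed
    count (e.g. 2 new records in the 36 winters from 1980 to 2015 ~ 5.5%)."""
    return 100.0 * new_records / years

def winter_months_between_records(percent_per_winter, winter_months=6):
    """Turn a per-winter probability into 'one new record every N winter
    months' (a 7% chance per winter works out to roughly 86 months)."""
    return winter_months / (percent_per_winter / 100.0)

print(annual_risk_from_records(2, 36))       # 5.55... -> the quoted 5.5%
print(winter_months_between_records(7.0))    # 85.7... -> the quoted "every 86 months"
```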

WATCH: [embedded GWPF video featuring Dr. David Whitehouse]

Follow Michael on Facebook and Twitter

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact licensing@dailycallernewsfoundation.org.

105 Comments
Schrodinger's Cat
August 5, 2017 4:00 am

Strange how they think that polluting observational data with one hundred times more fabricated data can be considered to be an improvement.

August 5, 2017 4:04 am

I’ll assume October to March = 6 months with records beginning in 1910 as I understand it from http://www.bbc.co.uk/news/uk-25944823. When records began is important in calculating the probability of a record as will be clear from the following.
In 1910, assuming randomness, the probability of a particular month being a record for that month is 1.
In 1911 the probability is 1/2, then for subsequent years 1/3, 1/4, 1/5, and so on.
By 2017 it’s 1/108. But there are 6 months for a record to occur in, so the % probability of a record month in 2017 is (1-((1-(1/108))^6))*100 = 5.43, i.e. calculate the probability of no record in the 6 months, subtract from 1 to get the probability of a record, and multiply by 100 to get the % probability.
Now 5.43% is pretty close to the calculation of 5.5 from the rainfall record. May I suggest that this is pretty consistent with the climate in SE England being random as regards rainfall. Why is the Met Office wasting money on supercomputers when a simple bit of probability calculation vs the record implies the rainfall in SE England is random? And where is the anthropogenic climate change if it’s random?
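
A minimal Python sketch of this calculation (the 108-year figure and the six winter months are taken from the comment above; the function name is illustrative):

```python
def record_month_chance(years_of_record, winter_months=6):
    """Chance that at least one winter month sets a new record, assuming
    rainfall is random, so each month has a 1/years_of_record chance of
    beating every earlier value for that calendar month."""
    p_single = 1.0 / years_of_record
    p_no_record = (1.0 - p_single) ** winter_months
    return (1.0 - p_no_record) * 100.0

# Records from 1910 to 2017 give 108 years of data for each winter month:
print(record_month_chance(108))   # ~5.43%, close to the 5.5% from the rainfall record
```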

Gamecock
August 5, 2017 5:28 am

It’s the age-old use of brute-force computing when you don’t actually know enough about what is happening to codify it.
They simply don’t know enough to be able to codify it.
Same with Climate/Global Circulation Models – we simply don’t know enough to program them. Bigger computers produce junk quicker, for more money.

I Came I Saw I Left
August 5, 2017 5:40 am

Sheet of paper: $0.02
Climate Supercomputer: $127 Million
Climate scientists spend $127 Million to prove they don’t know WTF they’re talking about: PRICELESS

Reply to  I Came I Saw I Left
August 6, 2017 8:06 am

If climate scientists really had to play with a supercomputer, one could build a machine from Raspberry Pis with 32 nodes for about $2000 (see the web for plans). It’s probably only as powerful as a standard PC. But it runs parallel programs and would probably reach the same nonsense conclusions for a much, much cheaper price.
Jim

August 5, 2017 5:51 am

The climate supercomputer was invented by fake scientists to fool gullible people. It’s the modern version of the charlatan’s crystal ball. Give me money and I will tell your fortune.

SocietalNorm
August 5, 2017 6:02 am

Well, if the computer simulations did not approximately match the hand-calculated results, there would have been something wrong in the programming.

Ivor Ward
August 5, 2017 6:04 am

The Met Office don’t even forecast yesterday’s weather correctly. To think that Group Captain Stagg was able to predict a window of opportunity for the D-Day landings in 1944 without the aid of any supercomputers, and now they cannot even get tomorrow right. So we are supposed to believe that they are right about 83 years’ time but not next week. If they dredged the rivers and drainage channels we would not notice the heavy rain. The EU says we have to let the land flood because………they say we have to let the land flood. They save a newt by drowning every small mammal in the area, or some such nonsense.

Bob Turner
August 5, 2017 6:09 am

GWPF. Great at criticising other people’s work, but never make a sensible contribution themselves. I’ll take some notice of them when they finally produce the global temperature analysis they promised (with blaring trumpets and waving flags) 2 years ago.

Sun Spot
Reply to  Bob Turner
August 6, 2017 5:25 pm

Bobby, you said “I’ll take some notice of them when they finally . . .”, try asking questions that have some meaning. You’re asking the wrong questions so you don’t have to worry about the answers.

Bill Illis
August 5, 2017 6:28 am

The thing is, this study was widely hailed in the global warming community as proof that extreme weather is now more likely.
Impressive sounding numbers, 7% chance, 34% chance, that no global warming believer would check or try to understand: just trumpet around the internet.
But it really is just the simple averages of the numbers experienced in the records and the basic simple math. I pointed this out on another board and at least one believer then checked the math but the rest just went along merrily citing more extreme weather is here.
Actually, the Met Office numbers were not even right.
First, there is really no trend in UK rainfall by region since the monthly records began in 1873 (maybe a tiny increase, but nothing significant; the monthly records are sufficiently random in nature that one could conclude there is no statistically significant trend).
So, there is a one-in-144 year (0.7%) chance of breaking a monthly record in any particular region in any particular month.
In any given winter (6 months * 0.7%) = 4.2% chance that any particular region will break a record in that winter.
The Met Office says it is now a 7.0% chance that a record will be broken in a region in a month but it is certainly just 4.2%.
Then they say there is a 34% chance that 1 of the 9 regions will break a monthly record in any given winter. 9 * 4.2% = 37.8% (their number is less than the basic math).
Climate scientists have always been bad at math.
And no climate model has been accurate enough on a very small regional basis to say it is the climate model results rather than the basic probabilities.
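
A minimal Python sketch of the back-of-the-envelope arithmetic above, assuming (as the comment does) random records with no trend since 1873, six winter months and nine regions; it also shows the exact “at least one record” probabilities, which come out slightly lower than the simple sums because the sums double-count winters with more than one new record:

```python
def record_chances(years=144, winter_months=6, regions=9):
    """Simple sums as in the comment, plus exact 'at least one' probabilities."""
    p_month = 1.0 / years                          # ~0.7% per region per month
    approx_region = winter_months * p_month        # ~4.2% per region per winter
    approx_anywhere = regions * approx_region      # ~37.5% somewhere each winter
    exact_region = 1.0 - (1.0 - p_month) ** winter_months
    exact_anywhere = 1.0 - (1.0 - p_month) ** (winter_months * regions)
    return approx_region, approx_anywhere, exact_region, exact_anywhere

print(record_chances())   # (0.0417, 0.375, 0.0410, 0.314)
```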

jorgekafkazar
Reply to  Bill Illis
August 5, 2017 1:06 pm

Climate Scientist: What number do you want?
Government Numpty: 6C degrees rise by 2100.
Climate Scientist: How much will you give me?
Government Numpty: £2,000,000
Climate Scientist: The answer is 6.062C. And it’s robust.
Who says Climate Scientists are bad at math?

Auto
Reply to  jorgekafkazar
August 6, 2017 12:36 pm

Jorge
That particular Climate Leech is good at knowing which side of her/his bread is buttered!
And to 3 dp, too!
Auto

JimH
August 5, 2017 6:31 am

Has anyone pointed out that in order to achieve claims of ‘unprecedented’ rainfall, they have ignored about 150 years of rainfall data from the UK?
Here’s the link to the press release: http://www.metoffice.gov.uk/news/releases/2017/high-risk-of-unprecedented-rainfall
Basically they have thrown away all the data prior to 1910, i.e. from then back to the start of the series in 1766, and replaced it with multiple runs of their model of the UK’s rainfall, then analysed their new ‘data’. Thus somehow proving that climate change creates extreme weather, or some such nonsense.
If you look at the series from 1766 you will see current rainfall in the UK is nothing out of the ordinary, indeed worse events have been recorded through the series.

TA
August 5, 2017 7:01 am

From the article: ““The risk of monthly rainfall exceeding the monthly record in the Southeast of England has not risen, contrary to many claims,” he argued based on the observational data,”
There’s that pesky “observational data” messing up the CAGW narrative again!

August 5, 2017 7:19 am

Probably nothing wrong with the computer. It is the crap they are feeding it. Like using a Ferrari as a paper weight.

JBom
August 5, 2017 8:13 am

Poor UK Met Off. They should have read the Hitchhiker’s Guide to the Galaxy and paid careful attention, like taking notes, in particular to the parts about Marvin and his troubled existence.
Given the near-certain unlikelihood of Forecasts, they could have saved a lot of Pounds-Loot by, just as mentioned, using a pencil and a piece of paper, and perhaps a mobile to call up people all over the UK to ask what they thought the next winter would turn out to be, then figuring an average with standard deviation.
Polls are no better than Super Computers with the UK Met Off’s “models”, but at least the UKers could complain, assured in knowing that the “Forecast” was produced by Them and no one and nothing else!
Ha ha 😉
PS. Not to worry though about the ill-spent 100 million pounds, cause that was no doubt paid for by the EU. It’s the loans from the EU that underwrote the 1 billion pounds in damages that the Brexitters need to worry about now, cause in 2019 those loans could be called in and that will be a shock to the market.

John Robertson
August 5, 2017 8:15 am

So Super Computer equals Birds Intestines?
The fine art of Scrying the Future..
Guts of small animals, tea leaves or Special playing cards.. all just as accurate as GIGO.
And so much cheaper for the taxpayer.

Tab Numlock
August 5, 2017 8:46 am

Could have saved a lot of money and improved accuracy with a big box containing a mechanical coin flipper.

Ed Zuiderwijk
August 5, 2017 9:09 am

2016. Big computer, faulty models. Result: rubbish.
2017. Bigger computer, faulty models. Result: rubbish, only more detailed now.

Reply to  Ed Zuiderwijk
August 5, 2017 11:08 am

Yeah, but it’s the P.R. value. Computer output = must be right; supercomputer output = REALLY must be right. Thirty-some years ago when I was a math tutor, I saw the same thing with calculators. Kid types in 6 x 6, key sticks so he really types 6 x 66, gets 396, writes it down, keeps going.

August 5, 2017 9:26 am

Weather models fail at periods of typically around 2 weeks because of the chaotic nature of the atmosphere and the inability of the models to capture it.
They must take into account several large-scale phenomena, each of which is governed by multiple variables and factors. For example, they must consider how the sun will heat the Earth’s surface, how air pressure differences will form winds and how water changing phase (from ice to water or water to vapor) will affect the flow of energy. They even have to try to calculate the effects of the planet’s rotation in space, which moves the Earth’s surface beneath the atmosphere. Small changes in any one variable in any one of these complex calculations can profoundly affect future weather.
In the 1960s, an MIT meteorologist by the name of Edward Lorenz came up with an apt description of this problem. He called it the butterfly effect, referring to how a butterfly flapping its wings in Asia could drastically alter the weather in New York City.
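Lorenz’s point is easy to demonstrate. The sketch below (a toy in Python, not anything the Met Office runs) integrates Lorenz’s 1963 three-variable system from two starting states that differ by one part in a billion; within a few dozen model time units the trajectories have diverged completely:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)    # perturb one coordinate by a billionth

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
```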
Climate models are much less affected by this butterfly effect because their equations are different, based more on processes relating to the radiation involved in the heating/cooling of the planet, which are not subject to as much chaos.
This would be most true if you could nail down all the long-term forces involved and represent them and their changes with high confidence in your climate model. This is where we enter the realm of speculative theory, not fact, and especially in the current world, not objective science.
Simplifying the case, we can keep all things equal in a climate model, then dial in X amount of greenhouse gas warming from CO2 and its feedbacks (for example, additional warming from increasing H2O). When we run this through something like 100 climate models with slight variations in some of the equations (ensembles) and come up with a very similar outcome, it provides higher confidence in the solution, for temperatures let’s say.
However, every ensemble member is programmed with a similar inadequacy to represent natural forces from natural cycles. This would still be ok, in a world where all things remain constant except for the increase in CO2.
This would also be ok in a world where we know with high confidence, what the sensitivity is in the atmosphere to changes in CO2(feedbacks and so on included).
But we don’t. All we have is a speculative theory that represents our best guesses…………that have been lowering the CO2 sensitivity for 2 decades. The biggest issue regarding this is the selling of these climate model products as being nearly infallible, settled science that provide us with guidance that can be imposed with impunity.
Taking this out into a realm of even higher speculation is when we take climate model output and try to use it to predict long term weather(which is climate) in specific regions or locations.
Even if we could nail down GLOBAL warming to +1.8 deg C and use that to forecast heavier rains, more high-end flooding events globally, and more melting of ice or other global effects, the ability to pinpoint specific regional “weather” effects (averaged longer term, which is climate) is very limited……yet this big drop in the expected skill of such projections is not properly communicated. It’s all presented as part of the “Settled Science”.
This is the opposite of using the authentic scientific method. We should present theories objectively using honest assessments of confidence…….that CHANGE as we learn more.
When the disparity between global climate model temperature projections and observations grows, as it clearly has, the scientific method dictates that we adjust the theory to decrease the amount of warming(or go in whatever direction the data takes us)………..instead of waiting for the observed warming to “catch up” to our preconceived notion based on a clearly busted version of the speculative theory that best supports political actions. Ego’s, cognitive bias, funding and political affiliation of climate scientists, modelers and others should play no role in the outcome of scientific products…….yet, they actually define the field of climate change/science today……..taken by most to be synonymous with “human caused” climate change.

Gamecock
Reply to  Mike Maguire
August 5, 2017 9:50 am

“The biggest issue regarding this is the selling of these climate model products as being nearly infallible, settled science that provide us with guidance that can be imposed with impunity.”
Why would they need any computers at all, if this is settled? Their actions say that all that came before was junk. As we said. Today’s silly report will be obsoleted by the next bigger computer report.

August 5, 2017 9:44 am

Whaaaaaaaatttt?
“Scientists said supercomputer modeling could have predicted the flooding. Thompson said the supercomputer “simulations provided one hundred times more data than is available from observed records.””
So….supercomputers are taking actual REAL observations….and MAKING UP 100X more “data”???
Because this super computer is a time machine??
And yet, even with 100X more data…they STILL aren’t better than basic math.
Perfect.

Gamecock
August 5, 2017 9:47 am

‘Thompson said the supercomputer “simulations provided one hundred times more data than is available from observed records.”’
Output from simulations IS NOT DATA!

Reply to  Gamecock
August 5, 2017 6:42 pm

They are random numbers. The supercomputer is just a super-fast random number generator. Then it applies statistical analyses to the random numbers and outputs probability curves. That’s really what a simulation is. But “supercomputer” sounds impressive; “random number generator” is laughable.
A slower random sequence generator
http://slotsetc.com/images/pictures/slots/jennings/jennings_buckaroo.jpg

Gamecock
Reply to  Dr. Strangelove
August 6, 2017 5:36 am

Reminds me . . . I wrote my own random number generator 35+ years ago, because I wasn’t happy with the operating system RNG, which was notoriously NOT random.
I took the floating point system time, moved it to an integer to strip off the decimal portion. Then I subtracted that integer value from the saved system time to get the decimal value. Then I multiplied the whole number by the remaining decimal number. I figured that was about as random as I could get out of a PDP/11.
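
For illustration only, a rough Python reconstruction of the scheme described above (not the original PDP-11 code, and not something to rely on where genuine randomness matters):

```python
import time

def time_split_rng():
    """Split the floating-point system time into its whole and fractional
    parts, then multiply them together, as described in the comment."""
    t = time.time()
    whole = int(t)        # integer part, decimal portion stripped off
    frac = t - whole      # the remaining decimal value
    return whole * frac   # the "random" number
```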

4caster
August 5, 2017 10:15 am

I believe that weather forecasters need to look at the sky, for several reasons: obtaining clues for changes in the near term, say the next 6 hours, to lend confidence to remotely sensed data and computer model output in order to validate what they are seeing virtually; and not least importantly, to also gain experience over the long haul to marry observational information to forecast outcomes. Meteorologists in their 30s and even 40s have not and are not being trained first as weather observers, as were those of us older folks, as automated platforms and sensors took away much of the need for that. I believe that many of today’s forecasters are sorely lacking in the ability to maximize their forecasting acumen, as they have less ability to understand what they are looking at in the sky, if they even bother to tear themselves away from the computer screen array. The old-timers had observational platforms, even cupolas, to accurately and consistently gauge the sky. I believe there IS a place for windows in meteorological offices, and forecasters need to continue to be trained in the observational practices to enhance weather forecasting. In the U.S., ASOS does not even see any clouds above 15k feet; how many times has a meteorologist been able to see cirrocumulus in the morning to foretell severe weather 12 hours later? Clues matter.
In regard to the referenced UKMO study, it appears that their “scientists,” as well as all of today’s so-called climate scientists, need to study historical weather events to a much greater degree. This should be done in the course of their degree studies, but I don’t think much of that happens nowadays. A pity.

John Robertson
Reply to  4caster
August 5, 2017 11:18 am

Actually 4caster, you make a fine case for setting serial liars about the coming weather adrift in small boats sans paddle or sail.
If they are particularly arrogant, air dropped into the middle of an ocean might be necessary.
Such methods might focus their attention… but there again stupid is incurable.

4caster
August 5, 2017 11:48 am

John Robertson, I believe that most weather forecasters do the best job they can (although I do know a few who couldn’t care less), but the technology makes even them, their supervisors, and their administrators believe that there is nothing to be learned or gained from older methods and practices. True, there has been a quantum leap forward in the science, understanding, and technology of meteorology, but that does not mean we should throw out proven methods, even if they are considered old. Most forecasters don’t “lie,” but when your behind is in the hot seat and a forecast, or a decision on whether a warning needs to be issued, has to be made, sound versus flawed decision-making can rapidly separate the men from the boys (or the women from the girls, as the case may be). But experience is often a valuable resource on which to rely, and I feel that today’s forecasters do not have all the ammunition they could have, as the science has left behind some usable resources.
My opinion on many of the TV people I now see in the U.S. is that they are sorely lacking in the ability to analyze atmospheric data, and they rely solely on some private forecaster with little experience, who in turn relies solely on the NOAA/NWS, and that may or may not turn out so well. The not-so-recent trend toward employing eye-catching young women for on-camera work can backfire during rapidly evolving weather situations. They may have degrees, but their lack of experience is apparent. That’s not a knock against women, but it IS a knock against the hiring of any person for ratings purposes at the expense of the ability to cogently communicate needed and possibly vital weather information.

Vald
August 5, 2017 12:40 pm

That depends who is answering. If I was one of the engineers I’d say this supercomputer is the best thing to happen in our lifetime.

Gavin
August 5, 2017 1:52 pm

I live in SE England. About ten years ago we had a hot dry summer and endless BBC news reports featuring hand-wringing journos and Met Office ‘experts’ filmed in front of empty reservoirs telling us this was the shape of things to come, because… Global Warming. I’m old enough to realise that people who are as consistently wrong as the BBC and the Met Office are probably just talking pish.

Gary Pearse
August 5, 2017 2:12 pm

Assuming floods are random events, the number of records expected to be set in a period of N years would be:
Ln(N) [treating the first year in the series as a record]
From 1908 to 2015, the chain of precipitation records expected to be set would be:
Ln(108) ~5
In the article it is 4+3+2 = 9, suggesting a degree of autocorrelation, but the declining number over subsequent 36-year periods shows a logarithmic character. For example, carrying on with our random approach, we would have to go three times as long, 324 yrs (including the first 108 yrs), before we would get another record flood:
Ln(324) ~6
If they have flood records before 1908 that they were using, then our Ln calculation number would be a little bit closer to theirs, although not a lot closer. Example: if they had a record flood in medieval times, say anno 1015, then by 2015 we would have expected only:
Ln (1000) ~7 increasing records.
Because of autocorrelation in cyclic climate patterns (AMO, etc.), there is likely a bunching up of records with long intervening periods of lesser flooding activity. My prediction for UK flooding is we could break the flood record in SE England one more time before 2050, and possibly not again for a century or two after that. I fear Dr Whitehouse was wrong too in his analysis, although I agree with him that the forecast could have been done in minutes with a pencil and paper.
It seems the $127 million for a computer was to compensate for poor statistical skills. And who’s to say they even included the AMO, etc. in their computer calculation. Remember, doom forecaster-in-chief Phil Jones of UEA admitted in the Climategate emails he didn’t know how to use Excel!
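
A small Python check of the Ln(N) rule of thumb used above: for N independent, identically distributed years the expected number of record years (counting the first) is the harmonic number 1 + 1/2 + … + 1/N, roughly Ln(N) + 0.577:

```python
import random

def expected_records(n_years):
    """Expected count of record years among n i.i.d. years: the harmonic number."""
    return sum(1.0 / k for k in range(1, n_years + 1))

def simulated_records(n_years, trials=20000):
    """Monte Carlo check: average number of new records in random series."""
    total = 0
    for _ in range(trials):
        best = float("-inf")
        for _ in range(n_years):
            x = random.random()
            if x > best:
                best, total = x, total + 1
    return total / trials

print(expected_records(108))    # ~5.3 (Ln(108) ~ 4.7 plus Euler's constant)
print(simulated_records(108))   # should land close to 5.3
```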

michael hart
August 5, 2017 4:41 pm

$127 Million Climate Supercomputer No Better Than ‘Using A Piece Of Paper’

The price of toilet paper is scandalous these days.