We’ve routinely joked in the past about “playstation® Climatology”, a phrase coined by JunkScience.com a few years back in response to the constant barrage of model output from supercomputers worldwide that forecast doom and gloom ahead for the human condition if we don’t repent and stop our use of fire.
Well now, the Air Force’s Research Lab in Rome, NY actually went and made a PS3-based supercomputer.
![PlayStation3-Supercomputer[1]](http://wattsupwiththat.files.wordpress.com/2011/03/playstation3-supercomputer1.jpg?resize=640%2C426&quality=83)
The Air Force’s Research Lab in Rome, NY has one of the cheapest supercomputers ever made, and best of all, over 3,000 of your friends can play Tekken on it. The computer is made from 1,716 PlayStation 3s linked together, and is used to process images from spy planes. From the article: “The Air Force calls the souped-up PlayStations the Condor Supercomputer and says it is among the 40 fastest computers in the world. The Condor went online late last year, and it will likely change the way the Air Force and the Air National Guard watch things on the ground.
Here are the systems before they were wired up:
Here’s what the Air Force says about the computing power:
53 TERAFLOPS Cell Broadband Engine (CBE) Cluster: This cluster enabled early access to IBM’s CBE® chip technology included in the low-priced commodity PS3 gaming consoles. This is a heterogeneous cluster with powerful sub-cluster head nodes. The cluster is comprised of 14 sub-clusters, each with 24 PS3s and one server containing dual quad-core Xeon chips.
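As a sanity check (my own back-of-envelope arithmetic, not the Air Force’s), the quoted 53 TFLOPS figure hangs together if you assume each PS3 exposes six SPEs at roughly 25.6 single-precision GFLOPS apiece:

```python
# Back-of-envelope check of the quoted 53 TFLOPS figure. The
# per-console number is an assumption: 6 usable SPEs x ~25.6
# single-precision GFLOPS each (the PS3 reserves the remaining SPEs).
sub_clusters = 14
ps3_per_sub = 24
gflops_per_ps3 = 6 * 25.6

total_ps3 = sub_clusters * ps3_per_sub        # 336 consoles in this cluster
tflops = total_ps3 * gflops_per_ps3 / 1000.0  # ~51.6 TFLOPS from the PS3s alone
print(total_ps3, round(tflops, 1))
```

The Xeon head nodes would plausibly make up the remaining couple of TFLOPS; note that this 336-console cluster description is a smaller system than the 1,716-unit Condor in the article above.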
Full writeup here
FPGA divide and conquer …
http://www.mil-embedded.com/articles/id/?4724
J. Osmand;
@davidmhoffer: In turn, you’re right, but you’re still wrong. A lot of high performance computing these days is leveraging GPU resources, like nVidia’s CUDA, OpenCL, AMD’s Stream, and Microsoft’s DirectCompute>>>
All good examples, but of the code out there written for HPC, only about 1% has been modified to leverage GPU. There’s a LOT of custom code out there, way more than there is commercial software that an app vendor like Fluent etc can economically convert.
nor is the nVidia-based GPU in the PS3 capable of being utilized in that manner (too old). Supercomputing applications of the PS3 rely entirely on the PS3’s single dual-threaded IBM Power-based PPU and the six available SPU vector processors>>>
Which are based on… gaming technology…and were developed by…IBM? Sorta. Joint project called STI where S=Sony and T=Toshiba. The project was basically to produce a general purpose co-processor for the IBM Power chip that could do both multi-media and HPC, and IBM’s role was to marry the gaming technology of S and T with their processors. Technically not a GPU like an nVidia card I suppose, but pretty darn close.
I don’t follow gaming much, so perhaps you could advise what future the SPU and Power chips will have in the PS3 at a consumer product level? I’m curious because without a consumer volume application for those chips, I’d think that IBM would lack the revenue to justify the R&D to keep pace with nVidia and AMD/ATI. The pSeries and nSeries volume just doesn’t cut it; they need a consumer play to build those kinds of numbers.
Is the supercomputer mentioned above really a new one?
From The Register (UK), 25th November 2009 13:19 GMT:
Assuming no units were fragged or not used, that supercomputer would have used 2536 PS3s. The one above uses 1716 units. If they’re one and the same, what happened to the other 820 PS3s? Spares? Christmas and birthday gifts?
That sort of stuff was already done years ago with discarded HP 95LX pocket computers, and of course the BOINC distributed computing network is there (please look it up, you might want to contribute.)
The ability to run other OSs was advertised as a feature by Sony when the PS3 was launched, and led directly to many innovative uses like this.
This is a problem for Sony, since the hardware costs are subsidised by software sales, so they lose money on hardware-only sales like this and many others. So they tried to block the “other OS” feature with a firmware update. They lied about the reasons, claiming it was due to security holes.
So-called “hackers” then broke the blocks, and now Sony are trying to prevent users from doing, to their own hardware, what Sony said they could do – by suing them using the awful DMCA as grounds. Nice folk at Sony, aren’t they – sell you a machine, then sue you for using the machine in the way they advertised.
Note of course they aren’t suing the USAF – just some poor kid who posted the changes online. See http://www.groklaw.net for the full sorry tale.
Anthony,
Hmmm. I wonder if you could series a whole whack of memory sticks together?
🙂
Re wws says: March 23, 2011 at 8:30 pm
Aha, but there’s a fix for that. Just install Sony’s latest update and no more other OS or SkyNet. Somehow, I suspect the USAF has a workaround for that upgrade.
I live near Rome, NY and actually retired (active duty Air Force) from AFRL. I’m now a very ‘old’ PhD candidate trying to use Monte Carlo simulation and elements of Bayesian analysis to develop a climate and environmental contaminant approach to predicting the rate of atmospheric corrosion. My approach employs the use of nearly 55,000 hours of weather observations taken at six different locations in order to ‘train’ my model to provide the best fit for multiple locations with vastly different environmental conditions. I’ve already obtained approval from AFRL to use the PS3 supercomputer if our multicore servers here at work are incapable of solving the simulations in a reasonable amount of time.
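For readers unfamiliar with the approach the commenter describes, here is a minimal sketch of the Monte Carlo idea, with an entirely made-up corrosion-rate formula and made-up weather distributions standing in for the real model and the 55,000 hours of observations:

```python
import random

# Toy illustration only: the rate formula and the weather
# distributions below are invented for this sketch, not taken from
# the commenter's actual model.
random.seed(42)  # reproducible draws

def corrosion_rate(rh, temp_c):
    # hypothetical atmospheric corrosion rate (um/year)
    return 0.5 + 0.04 * (rh - 60) + 0.02 * (temp_c - 15)

samples = []
for _ in range(10_000):
    rh = random.gauss(70, 10)    # relative humidity, %
    temp = random.gauss(12, 5)   # temperature, deg C
    samples.append(corrosion_rate(rh, temp))

mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to the analytic mean of 0.84
```

Each draw is independent, which is exactly why this kind of simulation parallelizes so well across a pile of PS3 consoles.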
@davidmhoffer
“All good examples, but of the code out there written for HPC, only about 1% has been modified to leverage GPU. There’s a LOT of custom code out there, way more than there is commercial software that an app vendor like Fluent etc can economically convert.”
I think most of the custom GPGPU code is being written with CUDA for nVidia hardware, as they were the first and have the most mature code and tools.
“Which are based on… gaming technology…and were developed by…IBM? Sorta. Joint project called STI where S=Sony and T=Toshiba. The project was basically to produce a general purpose co-processor for the IBM Power chip that could do both multi-media and HPC, and IBM’s role was to marry the gaming technology of S and T with their processors. Technically not a GPU like an nVidia card I suppose, but pretty darn close.”
Well, it’s GPU-like in that it’s a lot more limited and focused than the typical general purpose processor, while GPUs have transitioned from being collections of dumb shader engines to being a lot more general purpose over the years. That said, the SPUs are still general purpose (though not very good at it), and can actually run code without assistance from other hardware, while GPUs are still pretty limited in what they can do (while being incredibly fast at it) and still require a coprocessor to feed them. Sony originally boasted that the PS3 was going to have two CELLs and no GPU at all, and the SPUs would handle all the graphics rendering, but it turns out that you can’t beat a couple of hundred dumb cheap shaders for that.
“I don’t follow gaming much, so perhaps you could advise what future the SPU and Power chips will have in the PS3 at a consumer product level? I’m curious because without a consumer volume application for those chips, I’d think that IBM would lack the revenue to justify the R&D to keep pace with nVidia and AMD/ATI. The pSeries and nSeries volume just doesn’t cut it; they need a consumer play to build those kinds of numbers”
Outside of the PS3, none. Toshiba toyed with the idea of including CELL or the SPUs in some of their TVs, and that was it for consumer applications. IBM went on to create an improved CELL, the PowerXCell 8i, and incorporated it into some of their blade solutions, one of which ended up powering the Roadrunner supercomputer which was at the top of the TOP500 Linpack list for a while. Another company also offers PCI-E expansion cards with the 8i as a co-processor. IBM was never interested in playing in the consumer GPU market; they wanted an HPC processor, and found a couple of willing suckers in Toshiba and Sony to help foot the bill. As far as the future goes, outside of Sony, I don’t think the CELL has much of a future, unfortunately. You can’t beat x86 or nVidia or AMD for price and performance at the consumer level, and IBM’s own Power systems have reincorporated bits and pieces of CELL and are too entrenched in the corporate and HPC space. Odds are Sony will throw a bit of money at IBM to modify and modernize the 8i for use in the eventual PS4, but that’s not guaranteed.
One thing that Jen-Hsun Huang at NVIDIA got right was the insatiable thirst for floating point cycles. The new Fermi-based Tesla boards are 600 GFlops right out of the box, double precision floating point. And the Chinese supercomputer is the king right now, although with the lowering of the cost of supercomputing, we can expect continued advances in usable code to calculate solutions to engineering, solid state physics, biochemical, and other problems. Unfortunately, getting the wrong answer faster won’t help the fools in “Climate Science” at all.
Here:
The rest: http://www.guardian.co.uk/environment/2008/apr/28/climatechange.scienceofclimatechange
Will the nonsense never end? Reading WUWT, one gets lulled into thinking that the global warming alarmists have been reduced to a small fringe movement of rent-seeking scientists and left-wing academics. It is easy to forget that the AGW dogma under the euphemism of ‘climate change’ is unchallenged gospel at the highest levels of government—even in the military!
Surely someone reading this blog knows this William Anderson, an Assistant Secretary of the Air Force, and can steer him here, where he may learn that worrying about the Air Force’s ‘carbon footprint’ is at best a fatuous exercise in political correctness, and at worst a serious misuse of the taxpayer’s dollar.
/Mr Lynn
All fine and good, folks, but just don’t buy any of the new PS3s thinking you too can do this. SONY has disabled the Other OS feature and is currently suing a person who jailbroke their lockout.
In fact, there is a large class-action lawsuit against SONY for doing the lockout.
Have they considered connecting Mac Minis with Apple’s new Thunderbolt high speed (10 Gbps) connector? It seems a ready-made, off-the-shelf way to construct image-processing supercomputers.
davidmhoffer says:
March 23, 2011 at 9:36 pm
BarryW;
As I remember it, the playstations have vector processors in them, which is what is used in supercomputers anyway.>>>
Oddly, you’re wrong but right (now anyway).
####
Umm, the PlayStation, like all of the modern consoles, uses the IBM Cell processor, a variation of the PowerPC. It is considered a vector processor. It is not particularly specialized for the gaming industry. This is the very same chip that IBM uses in ultra high performance servers. Not only that, but their blade is almost identical hardware-wise. The OS to use is Yellow Dog Linux. People have been putting together supercomputers from PlayStations for quite a while. BTW, the Cell is probably the best MPU available.
kadaka (KD Knoebel) says:
March 24, 2011 at 2:35 am
“Assuming no units were fragged or not used, that supercomputer would have used 2536 PS3s. The one above uses 1716 units. If they’re one and the same, what happened to the other 820 PS3s? Spares? Christmas and birthday gifts?”
In scalable systems the delivery of components is often staggered to fit with the anticipated rate of scaling. That way you don’t have a bunch of hardware you already paid for sitting around collecting dust and depreciating. That’s pretty standard “just in time” inventory control practice – take ownership of capital goods exactly when you need them, and if it’s shit that is going to be resold, get it sold as soon as possible. That’s called an “inventory turn,” and the more inventory turns you can squeeze into a year the more efficient your business becomes. With the military, the entire cost of a multi-year project is not handed over by Congress all at once but in stages, pretty much for the same reason. Money (including the capital cost of goods) should never be left lying around not doing anything. If it’s cash then bank it or invest it to earn interest, and if it’s inventory, use just-in-time delivery, then sell it or put it to use as soon as possible. Whether money or equipment or labor, letting it sit around doing nothing is wasteful.
And… if you want the legacy of this as I remember it, it was ATI and Pinnacle that did the first application that used the GPU to offload previous CPU stuff, in Pinnacle Liquid Edition (a video editor) using the ATI 8500 chipset. The next step was demoed at Intel’s Developer Conference with Liquid as the PCI bus demo, with Liquid running real-time effects renders in HD through the 16x PCI bus. At this point, I am not sure about which was the chicken and which was the egg, but the Cell processors started appearing on paper for consoles. Then the Berkeley folks started talking about an ATI chipset based SETI Online (it was all ATI at the time, as there were complaints about how nVidia was doing DirectX and the implementation did not translate). That then led to a PlayStation discussion doing the same thing. And here we are.
Why did they use the older models? The newer ones use less power and are slimmer.
Sweet! Do you think they’d begrudge me a couple of hours to play Supreme Commander?
davidmhoffer says:
March 23, 2011 at 11:30 pm
Speaking of nVidia (I used to work pretty closely with them back in the mid/late 1990’s), I just got me a new cell phone – a Motorola Atrix – which just went on sale in the past few weeks and which won the highest award in its category at the last Consumer Electronics Show.
http://www.zdnet.com/blog/btl/breakthrough-device-of-ces-2011-motorola-atrix-phone-pc/43406
It’s powered by an nVidia 1 GHz dual-core Tegra 2 processor with 1 GB of RAM. Android 2.2 O/S sitting on top of Ubuntu Linux. Motorola put in a root lock and doesn’t expose the Linux core except to a full-size instance of Firefox which is used for a Web Desktop (Webtop) whenever there’s an external hi-def display attached. The cretins as of now require you to purchase either their HD Multimedia Dock or their LapDock before the full version of Firefox will run. Hackers (bless their hearts) have already found a way to root it, enable tethering without purchasing a tethering contract, and to allow it to run the full-size Firefox with a direct HDMI connection (no HD dock or lapdock) from the phone to the display. I’m not rooting mine quite yet, as I thought I’d give Motorola a chance to regain their senses and take some of the more egregious locks off themselves. That said, the Android O/S is still open and you can install any Android apps on it, but you can’t remove some of the preinstalled gorp you may not want (cough cough Motoblur cough cough) which is loaded up in the locked-down bootloader.
Still though, this thing is awesome and runs smooth as silk. Super high resolution 4″ screen (960×540 pixels in 24-bit color), touchscreen, almost 2 amp-hour lithium-ion battery, front and rear 5mpx cameras w/flash & autofocus, aGPS, digital compass, fingerprint scanner, wifi, bluetooth, 4G, speaker phone, USB & HDMI ports, proximity sensor, accelerometer, ambient light sensor, 16gb internal storage expandable to 48gb… and some I probably missed.
Amazing. A little heavy at 4.5 ounces but amazingly slim. I can’t believe how much hardware they’re packing into these things, and with nothing but passive cooling. Just wow. I’d have liked a hard QWERTY keyboard, but we’ll see how the virtual one works out after some practice with it.
Juice says:
March 24, 2011 at 8:49 am
Why did they use the older models? The newer ones use less power and are slimmer.
_____________________________________________________________
Because SONY locked out the Other OS option.
Read more here: http://www.groklaw.net/article.php?story=20110311112544990
and here (related Jail breaking suit) : http://www.groklaw.net/article.php?story=20110112115731533
Work on using massively parallel processors (MPP) has been going on for several years now. The motivation was to find a better method to speed up computations than simple vector processing. The old Cray supercomputers were fast, but the real speed came from using vector processing. However, you could only take advantage of this with a limited number of problems, usually doing simple arithmetic with very large arrays. A considerable amount of creativity was required to rewrite programs to take any advantage of this capability. These programs were always written in FORTRAN. I doubt that you could effectively use it with a more modern language like C++. Also, it didn’t help that a Cray was hideously expensive.
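To make the “simple arithmetic with very large arrays” point concrete, the classic vectorizable kernel is SAXPY (y = a*x + y). Here is a plain Python sketch of the operation itself (illustrative only; a Cray would of course run the Fortran equivalent as vector instructions):

```python
def saxpy(a, x, y):
    """Compute y = a*x + y, element by element.

    On a vector machine this whole loop becomes a handful of vector
    instructions instead of n scalar multiply-adds, which is where
    the Cray-style speedup came from.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [0, 1, 2, 3, 4], [10.0] * 5))  # [10.0, 12.0, 14.0, 16.0, 18.0]
```

Kernels like this vectorize trivially; the creativity the comment mentions went into contorting *everything else* in a program into this shape.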
For some years now I have argued that one of the real driving forces for developing small computer technology has been games. After all, my old TRS-80 (anyone remember these?) had all the word processing capabilities that I need to this day. However, the games were horrible.
Re Dave Springer on March 24, 2011 at 8:36 am:
Yeah, but this is the US military. Which will stockpile 1000 parts for a long-unused piece of equipment for decades, if one gets used or discarded then they will buy another 200 piece minimum order to maintain inventory, before finally scrapping or selling as surplus all of them when said piece of equipment is finally officially declared obsolete.
If the US military is now using “just in time” ordering, given the normal procurement delays and how “when it’s needed” can be RIGHT NOW, things really are worse than we thought.
Since no one has mentioned it yet, Stanford’s folding@home project has been incorporating consumer-owned PS3s as part of their massive distributed computing project since the advent of the PS3. Sony spent a lot of time and treasure on the effort, much to their credit.
PS3s currently contribute over 750 actual TFLOPS of the project’s 5.4+ PFLOPS actual output. That’s right, 5.4+ PETAFLOPS.
Now that’s a supercomputer!
http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats
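Working from the figures quoted in the comment above (my arithmetic, not Stanford’s), the PS3 share of the project’s throughput comes out to roughly 14%:

```python
# PS3 share of Folding@home throughput, using the figures quoted in
# the comment above: 750 TFLOPS out of 5.4 PFLOPS total.
ps3_tflops = 750
total_tflops = 5.4 * 1000  # 5.4 PFLOPS expressed in TFLOPS

share = ps3_tflops / total_tflops
print(f"{share:.1%}")  # 13.9%
```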
The advantage of using playstation systems is that they are 128 bit systems, and they are designed to simulate physics accurately. In fact, this supercomputer they’ve built is actually powerful enough, given the PhysX chips in them, to accurately simulate the entire solar system with accurate physics to the angstrom scale, provided their software is sufficiently efficient.
J. Osmand;
Thanks for the detailed reply!
“IBM was never interested in playing in the consumer GPU market; they wanted an HPC processor, and found a couple of willing suckers in Toshiba and Sony to help foot the bill. As far as the future goes, outside of Sony, I don’t think the CELL has much of a future, unfortunately”
I read the whole deal the same way. IBM as a corporation wants to play in the large scale HPC space. Going after a deal with 2,000 x86 servers makes their bid look pretty much identical to Dell’s, HP’s, etc. Lowest price wins and everyone has almost the same cost structure. IBM wanted to get a competitive advantage tied to their proprietary Power chip set. To do that they needed both the gaming industry’s floating point technology and consumer level volume to make it economical.
Unfortunately that locks them into a compromise design that limits maximum performance for either purpose. Researchers who have large code bases written for MPI will still have to modify their code to take advantage of vector processing, and anyone doing work shared with other researchers at other institutions is going to want something relatively portable. Going from a Dell x86 Rocks cluster to an HP x86 Rocks cluster… piece of cake. From a Dell x86 Rocks cluster to an HP x86/nVidia Rocks cluster… headache but manageable. From there to an IBM Power/CELL/AIX cluster… Let’s just say if I’m writing the RFP, it is going to have a clause in it regarding cost of code conversion and how the vendor is going to alleviate that.
On the other hand, I’m in sales; I only sell the darn things, some other poor sap has to make what I sold do what I said it would. But I have seen clauses like that appearing in RFPs for large scale HPC configs. For small configs ($250K) IBM doesn’t even bid. For large configs, I don’t have any “purpose built” types of customers like this one for the Air Force, where it was designed to support only a very limited number of applications. Most of my large configs are facilities shared by many researchers with many different code bases. I’ve sold 3 configs (lifetime) that hit the Top500. My first one debuted at 173 and slid off the list entirely in 18 months. My last one is still on the list at just a bit over 300. Probably gone entirely next list, as they refresh every 6 months. Tells you just how fast the technology is moving!
But that said, I’ve not seen IBM competing for that business with their Power architecture at all in that scale range, they haven’t even tried in the general purpose HPC space as far as I can tell. They only seem to show up with that technology at institutions that have legacy AIX environments with lots of code tuned for it.
I’ll share this as well as an anecdote that those of us who live in the bleeding edge technology space ought to appreciate the irony of.
What is the biggest single issue that differentiates a successful large-scale deployment from a failure?
In my experience: the cooling system. All those teraflops building models of the earth’s climate, or simulating air flow through a turbine, or crunching geological test data or whatever, the bleeding edge of technology applied to the bleeding edge of research, and the most common screw-up that defeats the whole thing? Plumbing. The racks are most often cooled by doors attached to the rack that have fans pulling air through what amounts to a fancy radiator carrying liquid coolant to someplace outside the computer room. The number of (I’ll be kind) “inadequate” designs is eyebrow-raising, and the implementations are worse. I’m standing in the middle of an install one day, surrounded by excited PhD researchers and department heads, when I hear someone say “ignore him, he’s just a plumber”. I instantly went to talk to the plumber. Sure enough, the schematics for the building showed the intake and outlet for the cooling system backward, and he was trying to tell someone. I saw an install once (not one of mine, thank g-d) where every cooling door started leaking after 6 months. So they took them apart and had a plumber re-solder all the joints.
You’d think with all the modeling horsepower on line they would have seen that coming…