We’ve routinely joked in the past about “PlayStation® Climatology”, a phrase coined by JunkScience.com a few years back in response to the constant barrage of model output from supercomputers worldwide that forecast doom and gloom ahead for the human condition if we don’t repent and stop our use of fire.
Well now, the Air Force’s Research Lab in Rome, NY actually went and made a PS3-based supercomputer.
The Air Force’s Research Lab in Rome, NY has one of the cheapest supercomputers ever made, and best of all, over 3,000 of your friends can play Tekken on it. The computer is made from 1,716 PlayStation 3s linked together, and is used to process images from spy planes. From the article: “The Air Force calls the souped-up PlayStations the Condor Supercomputer and says it is among the 40 fastest computers in the world. The Condor went online late last year, and it will likely change the way the Air Force and the Air National Guard watch things on the ground.”
Here’s the systems before they were wired up:
Here’s what the Air Force says about the computing power:
53 TERAFLOPS Cell Broadband Engine (CBE) Cluster: This cluster enabled early access to IBM’s CBE® chip technology included in the low priced commodity PS3 gaming consoles. This is a heterogeneous cluster with powerful sub-cluster head nodes. The cluster is comprised of 14 sub-clusters, each with 24 PS3s, and one server containing dual quad-core Xeon chips.
Full writeup here
Nope, it’s not a joke in any sense of the word. As I remember it, the PlayStations have vector processors in them, which is what supercomputers use anyway. Instead of a specially made computer, using off-the-shelf components makes a lot of sense. Shows what is really driving technology in that area now (games and video).
This kind of thing can be done for specific applications.
For example, number-cruncher CPUs don’t necessarily have good I/O capabilities; for crunching, cache usually matters more. It’s all about what you are trying to run on the system.
I love the hi-tech racks 🙂
Looks like the food service racks we modified for display use in our C-store, $79 at Costco.
Rumor has it that this supercomputer has been starting to link up with other playstations around the country, and has started to refer to itself as “SkyNet.”
US air force calls for mission to combat climate change
No kidding.
The before pictures are always more awesome than the after pictures. So tomorrow it’s gonna be a picture of a barfed-upon TI-82 with a leaking battery. :p
Ironic, though, considering the DDoS attack, is that the US Air Force, several years ago, had the idea that it could be wise to be able to launch pre-emptive DDoS attacks when they found it necessary, even against friendlies. Maybe they’ll be able to, once they put all those (seven-year-old) pieces together. :p
I worked at a couple of computer companies during their transition from single tasking to multi tasking operating systems. A peek at the interface code is bound to be interesting…
I already have the wire shelving for them, but will it run off of my 15A circuit in the living room?
Is it Move compatible?
Yeah, but I bet the controllers have flat batteries.
People have been using Playstations for scientific computing for about four years. Video games do similar calculations to many scientific applications.
When the iMacs first came out I remember that someone at (I think it was) Berkeley put together a cluster of 800 (or so) of the boxes off the shelf to make one of the premier academic supercomputers in the world, for about 1% of the price of any competing machines. Cost is the key here. One of the downsides to such machines is the lag in board-to-board communication. Whether this is a critical problem depends heavily on the kind of programming being used. If it is highly parallel, or something that can be distributed and farmed out to slave processors, then this is an ideal architecture. If it requires constant, arbitrary-host communication, then it fails badly. Image processing and partial differential equation solving, I think, work nicely on these units.
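For the curious, here is a minimal sketch of that “farm it out” pattern in C with MPI (the message-passing library mentioned further down the thread). The toy workload, the names, and the sizes are all illustrative, not anything from the Condor project:

```c
/* Toy embarrassingly-parallel sketch: each rank crunches its own slice
 * of an "image" independently -- the case that suits cheap clusters.
 * Compile with mpicc; all details here are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define PIXELS 1048576  /* total workload, divisible by the rank count here */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = PIXELS / size;               /* each node owns one slice */
    float *slice = malloc(chunk * sizeof(float));
    for (int i = 0; i < chunk; i++)          /* stand-in for real image data */
        slice[i] = (float)(rank * chunk + i);

    double local = 0.0;
    for (int i = 0; i < chunk; i++)          /* purely local number crunching */
        local += slice[i] * 0.5f;            /* stand-in for a filter kernel */

    double total;                            /* one cheap collective at the end */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("done: %f\n", total);

    free(slice);
    MPI_Finalize();
    return 0;
}
```

The point is that the only communication is the single reduce at the end; if every node instead had to talk to every other node at each step, a cheap interconnect would dominate the runtime, which is exactly the failure mode described above.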
One thing I would have changed in the setup shown is to remove the boxes altogether. Strip off any extraneous hardware, and add external cooling — a large fan at the end of each rack would extend the life of the machine and minimize problems. I presume they used high-end, shielded cables to reduce the need for error correction. Optical communication would be even better, but this is not in evidence, else they would probably have ditched the boxes as I mention above. So my thought is that this is pretty good for a quickly tossed-together unit, but they could probably fine-tune it without too much overhead.
This was done a couple of years back by a university too
Now that’s thinking out of The Box. 😉
BarryW;
As I remember it, the PlayStations have vector processors in them, which is what supercomputers use anyway.>>>
Oddly, you’re wrong but right (now anyway).
Vector processors pretty much went away as an approach to HPC (high performance computing) quite a few years ago. They couldn’t compete on performance or cost with stacks of general purpose servers and software tweaked to break workload up across all the servers.
Then along comes the gaming industry, with trillions of dollars at stake for the best whiz-bang graphics. The game console makers sunk big dollars into purpose-built chips to do the graphics calculations as fast as possible. The chips are useless as computer processors in the usual sense, but the computer processor can offload those graphics calcs to the GPU. The GPU does those calcs orders of magnitude faster than the CPU can.
And since the calcs we’re talking about are floating point…bingo! A new way to turbocharge an HPC cluster. HOWEVER, the code the researchers wrote to run on 1,700 servers won’t run on 1,700 PS3s without some modification. What kind of mods?
Well…if you’ve got some old vector processing programmers hanging around…they’ll pretty much know what to do.
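A hedged illustration of the kind of modification meant here — not actual Condor or Cell code, just the classic restructuring of a branchy scalar loop into straight-line array arithmetic that a vectorizing compiler (or an SPU) can chew through:

```c
/* Hypothetical before/after: restructuring scalar code for vector hardware.
 * Real Cell/SPU ports also involve DMA into local store and SIMD intrinsics;
 * this only shows the shape of the change. */

/* Before: a data-dependent branch in the loop defeats vectorization. */
void scale_scalar(float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++) {
        if (b[i] > 0.0f)
            a[i] = a[i] * b[i];
        else
            a[i] = 0.0f;
    }
}

/* After: branch-free, unit-stride arithmetic that maps onto vector units. */
void scale_vector(float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++) {
        float keep = (b[i] > 0.0f);   /* 1.0f or 0.0f mask, no branch */
        a[i] = a[i] * b[i] * keep;
    }
}
```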
This idea isn’t original; if I remember correctly, a group of young European geeks used the insane power of a half-dozen or so PS3s to definitively break one of the (admittedly lower-level) US DoD-approved cryptographic algorithms. (Anybody have a link?)
We’ve known for a long time that the humongous increases in speed of our graphics processors over the last decade — orders of magnitude more than increases in CPU power — have been driven exclusively by the gamers, not by the staid and serious design engineers sitting at their CAD drafting boards. So why should we be surprised when one group of geeks gets Linux to run on a PS3, and then another group adapts Linux’ vast interface riches to do serious work?
It’s a good thing they didn’t use Microsoft’s Xbox, or the whole military grid would crash every once in a while.
The Chinese made what they claim is the fastest computer in the world from PC graphics cards.
I love it.
A supercomputer for under 300,000 US dollars.
Very clever.
They’d want that room well ventilated – my PS3 overheats so much I can’t have it in the TV unit anymore, especially playing Fallout: New Vegas.
J Felton says:
March 23, 2011 at 9:48 pm
It’s a good thing they didn’t use Microsoft’s Xbox, or the whole military grid would crash every once in a while.
Actually my PS3 crashes more than my Xbox 360 (the new models are much better than the old red-ring-of-death models that first came out).
I wonder if they can replace these when one breaks down?
Have Sony changed their policy on disabling other OS installs in future?
The question was pondered in Micromart (issue 1107 20-26 May 2010) in relation to this particular use.
I’ve forwarded a link to this article to the U.K. Met Office. That way they will only waste £200,000-odd of taxpayers’ money instead of the £30,000,000+ they are begging for.
@ R.Craigen: The PS3 depends on its case for cooling, and its cooling setup is more than adequate, considering they won’t be using any of the GPU capability of these machines. High-end or optical cabling isn’t necessary either, and retrofitting different networking capabilities into the machines would defeat the purpose of using cheap consumer parts in the first place. The PS3 uses gigabit Ethernet, which is probably plenty stable and fast for such a “small” application.
@davidmhoffer: In turn, you’re right, but you’re still wrong. A lot of high-performance computing these days is leveraging GPU resources, like nVidia’s CUDA, OpenCL, AMD’s Stream, and Microsoft’s DirectCompute, but the PS3 hypervisor doesn’t permit user-installed operating systems to access the GPU, nor is the nVidia-based GPU in the PS3 capable of being utilized in that manner (too old). Supercomputing applications of the PS3 rely entirely on the PS3’s single dual-threaded IBM Power-based PPU and the six available SPU vector processors, aka the CELL Broadband Engine.
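For readers without a Cell manual handy, here is a rough analogy in portable C: one control thread hands fixed-size chunks to six workers, mirroring the PPU-plus-six-SPU division of labor. Real Cell code would use the libspe2 library, local store, and explicit DMA rather than pthreads and shared memory, so treat this purely as a sketch of the shape:

```c
/* Analogy only: a control thread (think PPU) farms chunks to six
 * workers (think SPUs). Real Cell code uses libspe2 and explicit DMA;
 * pthreads just makes the structure visible. */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 6          /* the PS3 exposes six SPUs to Linux */
#define N 6000

static float data[N];

static void *worker(void *arg)
{
    int id = *(int *)arg;
    int chunk = N / WORKERS;
    for (int i = id * chunk; i < (id + 1) * chunk; i++)
        data[i] *= 2.0f;   /* stand-in for real vector math */
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    int ids[WORKERS];
    for (int i = 0; i < N; i++) data[i] = 1.0f;

    for (int i = 0; i < WORKERS; i++) {   /* the "PPU" dispatches work */
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < WORKERS; i++)     /* then waits for the "SPUs" */
        pthread_join(t[i], NULL);

    printf("data[0] = %f\n", data[0]);    /* expect 2.0 */
    return 0;
}
```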
Ironically, it was this ability for the user to load their own operating system onto the PS3 that made it easier for hackers to try to break the system’s hypervisor open and gain access to the deeper and more sensitive parts of the system. That in turn led Sony to remove the capability to install other operating systems in later firmware updates, which in turn “made” (air quotes intentional) the hackers attack the security of the PS3 even more vehemently, ending with the PS3’s security being briefly broken wide open.
FPGA divide and conquer …
http://www.mil-embedded.com/articles/id/?4724
J. Osmand;
@davidmhoffer: In turn, you’re right, but you’re still wrong. A lot of high-performance computing these days is leveraging GPU resources, like nVidia’s CUDA, OpenCL, AMD’s Stream, and Microsoft’s DirectCompute>>>
All good examples, but of the code out there written for HPC, only about 1% has been modified to leverage GPUs. There’s a LOT of custom code out there, way more than there is commercial software that an app vendor like Fluent etc. can economically convert.
nor is the nVidia-based GPU in the PS3 capable of being utilized in that manner (too old). Supercomputing applications of the PS3 rely entirely on the PS3’s single dual-threaded IBM Power-based PPU and the six available SPU vector processors>>>
Which are based on… gaming technology… and were developed by… IBM? Sorta. Joint project called STI, where S=Sony and T=Toshiba. The project was basically to produce a general-purpose co-processor for the IBM Power chip that could do both multimedia and HPC, and IBM’s role was to marry the gaming technology of S and T with their processors. Technically not a GPU like an nVidia card, I suppose, but pretty darn close.
I don’t follow gaming much, so perhaps you could advise what future the SPU and Power chips will have in the PS3 at a consumer product level? I’m curious because without a consumer-volume application for those chips, I’d think that IBM would lack the revenue to justify the R&D to keep pace with nVidia and AMD/ATI. The pSeries and nSeries volume just doesn’t cut it; they need a consumer play to build those kinds of numbers.
Is the supercomputer mentioned above really a new one?
From The Register (UK), 25th November 2009 13:19 GMT:
Assuming no units were fragged or left unused, that supercomputer would have used 2,536 PS3s. The one above uses 1,716 units. If they’re one and the same, what happened to the other 820 PS3s? Spares? Christmas and birthday gifts?
That sort of stuff was already done years ago with discarded HP 95LX pocket computers, and of course the BOINC distributed computing network is there (please look it up, you might want to contribute.)
The ability to run other OSs was advertised as a feature by Sony when the PS3 was launched, and led directly to many innovative uses like this.
This is a problem for Sony, since the hardware costs are subsidised by software sales, so they lose money on hardware-only sales like this and many others. So they tried to block the “Other OS” feature with a firmware update. They lied about the reasons, claiming it was due to security holes.
So-called “hackers” then broke the blocks, and now Sony are trying to prevent users from doing, to their own hardware, what Sony said they could do – by suing them using the awful DMCA as grounds. Nice folk at Sony, aren’t they – sell you a machine, then sue you for using the machine in the way they advertised.
Note of course they aren’t suing the USAF – just some poor kid who posted the changes online. See http://www.groklaw.net for the full sorry tale.
Anthony,
Hmmm. I wonder if you could series a whole whack of memory sticks together?
🙂
Re wws says: March 23, 2011 at 8:30 pm
Aha, but there’s a fix for that. Just install Sony’s latest update and no more other OS or SkyNet. Somehow, I suspect the USAF has a workaround for that upgrade.
I live near Rome, NY and actually retired (active duty Air Force) from AFRL. I’m now a very ‘old’ PhD candidate trying to use Monte Carlo simulation and elements of Bayesian analysis to develop a climate and environmental contaminant approach to predicting the rate of atmospheric corrosion. My approach employs the use of nearly 55,000 hours of weather observations taken at six different locations in order to ‘train’ my model to provide the best fit for multiple locations with vastly different environmental conditions. I’ve already obtained approval from AFRL to use the PS3 supercomputer if our multicore servers here at work are incapable of solving the simulations in a reasonable amount of time.
@davidmhoffer
“All good examples, but of the code out there written for HPC, only about 1% has been modified to leverage GPUs. There’s a LOT of custom code out there, way more than there is commercial software that an app vendor like Fluent etc. can economically convert.”
I think most of the custom GPGPU code is being written with CUDA for nVidia hardware, as they were the first and have the most mature code and tools.
“Which are based on… gaming technology… and were developed by… IBM? Sorta. Joint project called STI, where S=Sony and T=Toshiba. The project was basically to produce a general-purpose co-processor for the IBM Power chip that could do both multimedia and HPC, and IBM’s role was to marry the gaming technology of S and T with their processors. Technically not a GPU like an nVidia card, I suppose, but pretty darn close.”
Well, it’s GPU-like in that it’s a lot more limited and focused than the typical general-purpose processor, while GPUs have transitioned from being collections of dumb shader engines to being a lot more general purpose over the years. That said, the SPUs are still general purpose (though not very good at it) and can actually run code without assistance from other hardware, while GPUs are still pretty limited in what they can do (while being incredibly fast at it) and still require a co-processor to feed them. Sony originally boasted that the PS3 was going to have two CELLs and no GPU at all, with the SPUs handling all the graphics rendering, but it turns out that you can’t beat a couple of hundred dumb cheap shaders for that.
“I don’t follow gaming much, so perhaps you could advise what future the SPU and Power chips will have in the PS3 at a consumer product level? I’m curious because without a consumer-volume application for those chips, I’d think that IBM would lack the revenue to justify the R&D to keep pace with nVidia and AMD/ATI. The pSeries and nSeries volume just doesn’t cut it; they need a consumer play to build those kinds of numbers”
Outside of the PS3, none. Toshiba toyed with the idea of including CELL or the SPUs in some of their TVs, and that was it for consumer applications. IBM went on to create an improved CELL, the PowerXCell 8i, and incorporated it into some of their blade solutions, one of which ended up powering the Roadrunner supercomputer, which was at the top of the TOP500 Linpack list for a while. Another company also offers PCI-E expansion cards with the 8i as a co-processor. IBM was never interested in playing in the consumer GPU market; they wanted an HPC processor, and found a couple of willing suckers in Toshiba and Sony to help foot the bill. As far as the future goes, outside of Sony, I don’t think the CELL has much of a future, unfortunately. You can’t beat x86 or nVidia or AMD for price and performance at the consumer level, and IBM’s own Power systems have reincorporated bits and pieces of CELL and are too entrenched in the corporate and HPC space. Odds are Sony will throw a bit of money at IBM to modify and modernize the 8i for use in the eventual PS4, but that’s not guaranteed.
One thing that Jensen Huang at NVIDIA got right was the insatiable thirst for floating-point cycles. The new Fermi-based Tesla boards are 600 GFlops right out of the box, double-precision floating point. And the Chinese supercomputer is the king right now, although with the lowering of the cost of supercomputing, we can expect continued advances in usable code to calculate solutions to engineering, solid-state physics, biochemical, and other problems. Unfortunately, getting the wrong answer faster won’t help the fools in “Climate Science” at all.
Here:
The rest: http://www.guardian.co.uk/environment/2008/apr/28/climatechange.scienceofclimatechange
Will the nonsense never end? Reading WUWT, one gets lulled into thinking that the global warming alarmists have been reduced to a small fringe movement of rent-seeking scientists and left-wing academics. It is easy to forget that the AGW dogma under the euphemism of ‘climate change’ is unchallenged gospel at the highest levels of government—even in the military!
Surely someone reading this blog knows this William Anderson, an Assistant Secretary of the Air Force, and can steer him here, where he may learn that worrying about the Air Force’s ‘carbon footprint’ is at best a fatuous exercise in political correctness, and at worst a serious misuse of the taxpayer’s dollar.
/Mr Lynn
All fine and good, folks, but don’t buy any of the new PS3s thinking you too can do this. Sony has disabled the Other OS feature and is currently suing a person who jailbroke their lockout.
In fact there is a large class-action lawsuit against Sony for doing the lockout.
Have they considered connecting Mac Minis with Apple’s new Thunderbolt high-speed (10 Gb/s) connector? It seems a ready-made, off-the-shelf way to construct image-processing supercomputers.
davidmhoffer says:
March 23, 2011 at 9:36 pm
BarryW;
As I remember it, the PlayStations have vector processors in them, which is what supercomputers use anyway.>>>
Oddly, you’re wrong but right (now anyway).
####
Umm, the PlayStation, like all of the modern consoles, uses the IBM Cell processor, a variation of the PowerPC. It is considered a vector processor. It is not particularly specialized for the gaming industry. This is the very same chip that IBM uses in ultra-high-performance servers. Not only that, but their blade is almost identical hardware-wise. The OS to use is Yellow Dog Linux. People have been putting together supercomputers from PlayStations for quite a while. BTW, the Cell is probably the best MPU available.
kadaka (KD Knoebel) says:
March 24, 2011 at 2:35 am
“Assuming no units were fragged or left unused, that supercomputer would have used 2,536 PS3s. The one above uses 1,716 units. If they’re one and the same, what happened to the other 820 PS3s? Spares? Christmas and birthday gifts?”
In scalable systems the delivery of components is often staggered to fit with the anticipated rate of scaling. That way you don’t have a bunch of hardware you already paid for sitting around collecting dust and depreciating. That’s pretty standard “just in time” inventory control practice – take ownership of capital goods exactly when you need them, and if it’s stuff that is going to be resold, get it sold as soon as possible. That’s called an “inventory turn,” and the more inventory turns you can squeeze into a year, the more efficient your business becomes. With the military, the entire cost of a multi-year project is not handed over by Congress all at once but in stages, pretty much for the same reason. Money (including the capital cost of goods) should never be left lying around not doing anything. If it’s cash, then bank it or invest it to earn interest; if it’s inventory, use just-in-time delivery, then sell it or put it to use as soon as possible. Whether money or equipment or labor, letting it sit around doing nothing is wasteful.
And… if you want the legacy of this as I remember it, it was ATI and Pinnacle that did the first application that used the GPU to offload previously CPU-bound work, in Pinnacle Liquid Edition (a video editor) using the ATI 8500 chipset. The next step was demoed at Intel’s Developer Conference with Liquid as the PCI bus demo, running real-time effects renders in HD through the 16x PCI bus. At this point, I am not sure about which was the chicken and which was the egg, but the Cell processors started appearing on paper for consoles. Then the Berkeley folks started talking about an ATI-chipset-based SETI Online (it was all ATI at the time, as there were complaints about how nVidia was doing DirectX and the implementation did not translate). That then led to a PlayStation discussion doing the same thing. And here we are.
Why did they use the older models? The newer ones use less power and are slimmer.
Sweet! Do you think they’d begrudge me a couple of hours to play Supreme Commander?
davidmhoffer says:
March 23, 2011 at 11:30 pm
Speaking of nVidia (I used to work pretty closely with them back in the mid/late 1990s), I just got me a new cell phone – a Motorola Atrix – which just went on sale in the past few weeks and which won the highest award in its category at the last Consumer Electronics Show.
http://www.zdnet.com/blog/btl/breakthrough-device-of-ces-2011-motorola-atrix-phone-pc/43406
It’s powered by an nVidia 1 GHz dual-core Tegra 2 processor with 1 gig of RAM. Android 2.2 O/S sitting on top of Ubuntu Linux. Motorola put in a root lock and doesn’t expose the Linux core except to a full-size instance of Firefox, which is used for a Web Desktop (Webtop) whenever there’s an external hi-def display attached. The cretins as of now require you to purchase either their HD Multimedia Dock or their LapDock before the full version of Firefox will run. Hackers (bless their hearts) have already found a way to root it, enable tethering without purchasing a tethering contract, and allow it to run the full-size Firefox with a direct HDMI connection (no HD dock or lapdock) from the phone to the display. I’m not rooting mine quite yet, as I thought I’d give Motorola a chance to regain their senses and take some of the more egregious locks off themselves. That said, the Android O/S is still open and you can install any Android apps on it, but you can’t remove some of the preinstalled gorp you may not want (cough cough Motoblur cough cough), which is loaded up in the locked-down bootloader.
Still though, this thing is awesome and runs smooth as silk. Super-high-resolution 4″ screen (960×540 pixels in 24-bit color), touchscreen, almost 2 amp-hour lithium-ion battery, front and rear 5 Mpx cameras w/flash & autofocus, aGPS, digital compass, fingerprint scanner, wifi, bluetooth, 4G, speakerphone, USB & HDMI ports, proximity sensor, accelerometer, ambient light sensor, 16 GB internal storage expandable to 48 GB… and some I probably missed.
Amazing. A little heavy at 4.5 ounces but amazingly slim. I can’t believe how much hardware they’re packing into these things, and with nothing but passive cooling. Just wow. I’d have liked a hard QWERTY keyboard, but we’ll see how the virtual one works out after some practice with it.
Juice says:
March 24, 2011 at 8:49 am
Why did they use the older models? The newer ones use less power and are slimmer.
_____________________________________________________________
Because SONY locked out the Other OS option.
Read more here: http://www.groklaw.net/article.php?story=20110311112544990
and here (related Jail breaking suit) : http://www.groklaw.net/article.php?story=20110112115731533
Work on using massively parallel processors (MPP) has been going on for several years now. The motivation was to find a better method to speed up computations than simple vector processing. The old Cray supercomputers were fast, but the real speed came from using vector processing. However, you could only take advantage of this with a limited number of problems, usually doing simple arithmetic with very large arrays. A considerable amount of creativity was required to rewrite programs to take any advantage of this capability. These programs were always written in FORTRAN. I doubt that you could effectively use it with a more modern language like C++. Also, it didn’t help that a Cray was hideously expensive.
For some years now I have argued that one of the real driving forces for developing small computer technology has been games. After all, my old TRS-80 (Anyone remember these?) had all the word processing capabilities that I need to this day. However the games were horrible.
Re Dave Springer on March 24, 2011 at 8:36 am:
Yeah, but this is the US military. Which will stockpile 1,000 parts for a long-unused piece of equipment for decades, and if one gets used or discarded they will buy another 200-piece minimum order to maintain inventory, before finally scrapping or selling all of them as surplus when said piece of equipment is finally officially declared obsolete.
If the US military is now using “just in time” ordering, given the normal procurement delays and how “when it’s needed” can be RIGHT NOW, things really are worse than we thought.
Since no one has mentioned it yet, Stanford’s Folding@home project has been incorporating consumer-owned PS3s as part of its massive distributed-computing project since the advent of the PS3. Sony spent a lot of time and treasure on the effort, much to their credit.
PS3s currently contribute over 750 actual TFLOPS of the project’s 5.4+ PFLOPS actual output. That’s right, 5.4+ PETAFLOPS.
Now that’s a supercomputer!
http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats
The advantage of using PlayStation systems is that they are 128-bit systems, and they are designed to simulate physics accurately. In fact, this supercomputer they’ve built is actually powerful enough, given the PhysX chips in them, to accurately simulate the entire solar system with accurate physics to the angstrom scale, provided their software is sufficiently efficient.
J. Osmand;
Thanks for the detailed reply!
“IBM was never interested in playing in the consumer GPU market; they wanted an HPC processor, and found a couple of willing suckers in Toshiba and Sony to help foot the bill. As far as the future goes, outside of Sony, I don’t think the CELL has much of a future, unfortunately”
I read the whole deal the same way. IBM as a corporation wants to play in the large-scale HPC space. Going after a deal with 2,000 x86 servers makes their bid look pretty much identical to Dell’s, and HP’s, etc., etc. Lowest price wins and everyone has almost the same cost structure. IBM wanted a competitive advantage tied to their proprietary Power chip set. To do that they needed both the gaming industry’s floating-point technology and consumer-level volume to make it economical.
Unfortunately that locks them into a compromise design that limits maximum performance for either purpose. Researchers who have large code bases written for MPI will still have to modify their code to take advantage of vector processing, and anyone doing work shared with researchers at other institutions is going to want something relatively portable. Going from a Dell x86 Rocks cluster to an HP x86 Rocks cluster… piece of cake. From a Dell x86 Rocks cluster to an HP x86/nVidia Rocks cluster… headache, but manageable. From there to an IBM Power/CELL/AIX cluster…. Let’s just say if I’m writing the RFP it is going to have a clause in it regarding cost of code conversion and how the vendor is going to alleviate that.
On the other hand, I’m in sales. I only sell the darn things; some other poor sap has to make what I sold do what I said it would. But I have seen clauses like that appearing in RFPs for large-scale HPC configs. For small configs ($250K) IBM doesn’t even bid. For large configs, I don’t have any “purpose-built” types of customers like this one for the Air Force, where it was designed to support only a very limited number of applications. Most of my large configs are facilities shared by many researchers with many different code bases. I’ve sold 3 configs (lifetime) that hit the Top500. My first one debuted at 173 and slid off the list entirely in 18 months. My last one is still on the list at just a bit over 300. Probably gone entirely next list, as they refresh every 6 months. Tells you just how fast the technology is moving!
But that said, I’ve not seen IBM competing for that business with their Power architecture at all in that scale range; they haven’t even tried in the general-purpose HPC space as far as I can tell. They only seem to show up with that technology at institutions that have legacy AIX environments with lots of code tuned for it.
I’ll share this as well, an anecdote whose irony those of us who live in the bleeding-edge technology space ought to appreciate.
What is the biggest single issue that differentiates a successful large-scale deployment from a failure?
In my experience: the cooling system. All those teraflops building models of the earth’s climate, or simulating air flow through a turbine, or crunching geological test data or whatever, the bleeding edge of technology applied to the bleeding edge of research, and the most common screw-up that defeats the whole thing? Plumbing. The racks are most often cooled by doors attached to the rack that have fans pulling air through what amounts to a fancy radiator carrying liquid coolant to someplace outside the computer room. The number of (I’ll be kind) “inadequate” designs is eyebrow-raising, and the implementations are worse. I’m standing in the middle of an install one day, surrounded by excited PhD researchers and department heads, when I hear someone say “ignore him, he’s just a plumber”. I instantly went to talk to the plumber. Sure enough, the schematics for the building showed the intake and outlet for the cooling system backward, and he was trying to tell someone. I saw an install once (not one of mine, thank g-d) where every cooling door started leaking after 6 months. So they took them apart and had a plumber re-solder all the joints.
You’d think with all the modeling horsepower on line they would have seen that coming…
“Condor”, huh?
Sure hope it runs longer than 3 days…
Must be for applications that do not require much inter-processor communication. That is limited by light speed, so cable length and synchronization issues are critical.
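Rough numbers back this up. Assuming a ~3 GHz clock and free-space light speed (signals in copper are slower still):

\[
t_{\text{clock}} = \frac{1}{3\ \text{GHz}} \approx 0.33\ \text{ns},
\qquad
d = c \cdot t_{\text{clock}} \approx (3\times 10^{8}\ \text{m/s})(0.33\ \text{ns}) \approx 10\ \text{cm}.
\]

So a signal covers only about 10 cm per clock cycle, and a few meters of cable costs tens of cycles each way before any switch latency is even counted.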
Still, it is probably great for image processing.
Wow! Interesting. 51 years ago this month I walked into the control room of the largest combined computer system in the world at the time (we were told). Dual computers of 50,000 vacuum tubes each, one on each side of the room, always talking to each other, as one part of the US Air Force’s SAGE (Semi-Automatic Ground Environment) air defense system. THAT was the start of a wonderful career, but not in the AF. Together they maybe had 1 megaflop. 🙂 But they did the job. Go Air Force!
^ Milwaukee Bob @ 1618
Forty years ago, I recall walking into the SAGE building (as a wide-eyed kid) at the USAF base in Duluth, MN, at which my father was a computer maintenance technician. One literally walked into the beast. To my young imagination it was like being aboard the starship Enterprise.
Have often wondered why Christopher Monckton so often mentions Xbox 360s when referring to the computer models.
While I thought he was just being post-ironic, perhaps there’s more to it than we realised.
The military mum on Wife Swap should be shown this article. She argued with her temporary son about playing video games, saying they had no benefits. Obviously they do.
Poor Yorek says:
March 24, 2011 at 5:01 pm
I was close, Truax AFB Madison WI
One literally walked into the beast.
Very true! The core memory unit was 4k and stood 6′ tall and about 3’x3′ square. You could look inside it and see the magnetic doughnuts “hanging” on the x, y & z wires.
and those massive control consoles were used years later in various movies…
I remember this from a couple of years ago, and at the time it was said that more and more supercomputers would be built this way. Hopefully it’s relevant.
Here’s a blurb from IBM at: http://www.research.ibm.com/cell/
The Cell Architecture grew from a challenge posed by Sony and Toshiba to provide power-efficient and cost-effective high-performance processing for a wide range of applications, including the most demanding consumer appliance: game consoles. Cell – also known as the Cell Broadband Engine Architecture (CBEA) – is an innovative solution whose design was based on the analysis of a broad range of workloads in areas such as cryptography, graphics transform and lighting, physics, fast-Fourier transforms (FFT), matrix operations, and scientific workloads. As an example of innovation that ensures the clients’ success, a team from IBM Research joined forces with teams from IBM Systems Technology Group, Sony and Toshiba, to lead the development of a novel architecture that represents a breakthrough in performance for consumer applications. IBM Research participated throughout the entire development of the architecture, its implementation and its software enablement, ensuring the timely and efficient application of novel ideas and technology into a product that solves real challenges.
wws says:
March 23, 2011 at 8:30 pm
🙂
The big screen at the front is where the demonic face appears.
Imagine that?!
So, while my kids and grand-kids aren’t online (late in the middle of the night), millions of these things could easily be chained together via their already-in-place internet connections and be used to solve any number of very important computational tasks, near instantly? That’s an awesome thought!
On the other hand, if my kids and grand kids had ever used cheap bungee cords to secure their fairly expensive (in relative terms) devices, as the AF appears to have done here, I would ground them for a week! And the thought of grounding the USAF, even for one minute, is irony indeed ;-]
Oh, great!
Now Playstations, Nintendos, XBoxes, personal computers, and more will be added to EAR/ITAR lists of the bureaucracy.
They even have food mixing equipment and cordless phones listed (since cordless phones use spread-spectrum technology, the basic versions of which were invented by Hedy Lamarr three-quarters of a century ago).
ITAR = International Traffic in Arms Regulations; EAR = Export Administration Regulations, covering exports restricted for security reasons.
No substitute for stopping evil at its source, instead of trying to run a Bar Lev Line.
(The Bar Lev Line was Israel’s successful attempt to prove the old adage about not learning from history. In one of the official wars against Israel the Egyptian Army got around the Bar Lev Line faster than Germany got around France’s Maginot Line in WWII.)
What is your GFlop per unit cost when cost is zero?
http://www.extremelinux.info/stonesoup/
Ah, the “good old days” of Beowulf clusters… I made one “just for fun” with 6 or 8 nodes (it varied). At present, I’ve still got 2 of them (one running GIStemp… I ‘played up’ that it was on an old 400 MHz AMD chip box, but in fact I’d done the port to one node of my 8-node Beowulf, thinking if I needed extra “umph” I’d just add nodes, and never needed more than one. It being pretty stupid code…)
Oh, and the AF is using the Mark I “Bread Rack”… I’ve used them before too…
http://beowulf.org/
Oh, and the old Cray ran C just fine. We installed the “first ever” release of UNICOS, the UNIX OS on a Cray, at Apple in about ’86 IIRC. Ran C just fine. It would vectorize OK, but not nearly as well as the FORTRAN (something about staff-centuries of optimizing in the FORTRAN compiler…). It was “helpful” to have a loop counter of “64” (called a “stride”) as then your problem got broken into bits that exactly filled the vector units… So you would put a “for I = 1 to 64” loop in the middle of things if possible, and an outer loop to do it as many times as needed; then the stuff inside the loop (like “A=B*C”) would get done in the vector units in groups of 64 per instruction cycle.
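In C, the strip-mining trick described above looks roughly like this; the 64 matches the vector register length mentioned, and everything else is illustrative rather than actual UNICOS-era code:

```c
/* Strip-mining sketch: break the work into stride-64 inner loops so
 * each pass exactly fills a 64-element vector register. Illustrative
 * only -- not real Cray code. */
#define STRIDE 64

void vmul(float *a, const float *b, const float *c, int n)
{
    int full = (n / STRIDE) * STRIDE;

    /* Outer loop walks the arrays in register-sized blocks... */
    for (int i = 0; i < full; i += STRIDE)
        /* ...and this inner loop is what the compiler turns into
         * one vector operation per STRIDE elements: A = B * C. */
        for (int j = i; j < i + STRIDE; j++)
            a[j] = b[j] * c[j];

    for (int i = full; i < n; i++)   /* leftover tail, done scalar */
        a[i] = b[i] * c[i];
}
```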
Sometimes I miss my old Cray… Had a 2 TB STK “Tape Robot” on it that you could open a door and go inside… about $42 Million all told, CPU & Robot IIRC.
At any rate, as I can now buy a 2 TB (and much faster) disk for about $100 and a faster computer for about $400, I don’t miss it THAT much 😉
Had a 750 kVA power feed and about a 16 x 16 foot “water tower” with a 4-inch chilled-water line for cooling… the power bill was more than my house cost… hmmm… maybe I don’t miss it…
This is the ultimate irony in view of the criticism of the “playstation” mentality of pilots killing people with drones on the other side of the world.
Please, credit where credit is due; she had help:
Per http://en.wikipedia.org/wiki/Hedy_Lamarr#Frequency-hopping_spread-spectrum_invention
Side note: I’ve not reviewed the merits of the awarded patent to see if the ‘claims’ of SS technology are true, and if the principles and concepts thought of then are actually used today…
.
Ah, an “ARM architecture processor” per: http://en.wikipedia.org/wiki/Nvidia_Tegra
(ex-TI GaAs Facility (now TriQuint owned I think) alum here …)
.
Wait until they start ‘trading securities and commodities‘ – oh wait, they already are:
http://www.zerohedge.com/article/visualizing-todays-hft-market-stick-save?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+zerohedge%2Ffeed+%28zero+hedge+-+on+a+long+enough+timeline%2C+the+survival+rate+for+everyone+drops+to+zero%29
HFT or High Frequency Trading – http://en.wikipedia.org/wiki/High-frequency_trading
No more open outcry pit trading, with ‘trading floors’ having been replaced by electronic trading systems Globex – http://en.wikipedia.org/wiki/Globex
Open outcry – http://en.wikipedia.org/wiki/Open_outcry
.
Perhaps overshadowing the accomplishments of Hedy Lamarr and her associate (or vice versa) were the developers of the SIGSALY (also known as Green Hornet) secure speech system used in World War II for the highest-level Allied communications:
SIGSALY/Wiki – WW2 secure speech system
Sigsaly/NSA – Sigsaly Story
Sigsaly/NSA – The Start of the Digital Revolution
Notable is the use of ‘noise values’ as the encryption key, stored on a phonograph record. The record would be duplicated, with one copy distributed to the SIGSALY system on each end of a conversation… _this_ idea survives in DSSS (direct-sequence spread spectrum) systems (of which Qualcomm CDMA and CDMA-derived cellular systems are examples) and corresponds to the Walsh ‘pseudo-random’ noise sequence used today to ‘spread’ direct-sequence spread-spectrum signals to/from subscriber and base-station transceivers.
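A toy illustration of the spreading idea, with the chip sequence playing the role the phonograph noise key did (and that PN/Walsh sequences play today); the 8-chip code and everything else here is made up for the example:

```c
/* Toy direct-sequence spreading: XOR each data bit against a chip
 * sequence shared by both ends -- the role SIGSALY's noise records and
 * modern PN/Walsh sequences play. 8 chips per bit, all values made up. */
#include <stdio.h>

#define CHIPS 8
static const int code[CHIPS] = {1, 0, 1, 1, 0, 0, 1, 0}; /* shared key */

/* Spread one data bit into CHIPS transmitted chips. */
static void spread(int bit, int *out)
{
    for (int i = 0; i < CHIPS; i++)
        out[i] = bit ^ code[i];
}

/* Despread by XORing with the same code; a majority vote over the
 * chips recovers the bit even if noise flips a few of them. */
static int despread(const int *in)
{
    int votes = 0;
    for (int i = 0; i < CHIPS; i++)
        votes += in[i] ^ code[i];    /* each chip recovers the bit */
    return votes > CHIPS / 2;
}

int main(void)
{
    int tx[CHIPS];
    spread(1, tx);
    tx[3] ^= 1;                      /* flip one chip to mimic noise */
    printf("recovered bit: %d\n", despread(tx)); /* prints 1 */
    return 0;
}
```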
.
With all this reminiscing, I feel like everyone should be using taglines at the end of their posts…
Bruce
– Never be fooled by a kiss, or let a fool kiss you