Harnessing Infinity: The Promise of Quantum Computing


Guest essay by Eric Worrall

The Register has published a fascinating video about Quantum Computing: an interview with D-Wave, a company which manufactures what it claims are quantum computing systems.

According to The Register:

It turns out that there are three broad categories of problem where your best bet is a quantum computer. The first is a Monte Carlo type simulation, the second is machine learning, and the third is optimization problems that would drive a regular computer nuts – or, at least, take a long time for it to process.

An example of this type of optimization problem is this: Consider the approximately 2,000 professional hockey players in North America. Your task is to select the very best starting line-up from that roster of guys.

There are a lot of variables to consider. First there’s all the individual stats, like how well they score, pass, and defend. But since hockey is a team sport, you also have to consider how well they work when combined with other specific players. When you start adding variables like this, the problem gets exponentially more difficult to solve.

But it’s right up the alley of a quantum computer. A D-Wave system would consider all of the possible solutions at the same time, then collapse down to the optimal set of players. It’s more complicated than I’m making out, of course, but it’s a good layman-like example.

Read more: http://www.theregister.co.uk/2016/04/18/d_wave_demystifies_quantum_computing/
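
To get a feel for the scale The Register is describing, here is a rough back-of-envelope sketch in Python (the roster and line-up sizes come from the quote above; the skill scores and pairwise “chemistry” bonuses are invented purely for illustration). Choosing an unordered group of six from roughly 2,000 players already gives about 10^17 candidate line-ups before any chemistry is considered, which is why the brute-force evaluation below only makes sense on a toy roster:

    import math
    from itertools import combinations

    ROSTER_SIZE = 2000     # roughly the number of professional players mentioned above
    LINEUP_SIZE = 6        # a starting line-up

    # Even ignoring positions and chemistry, the number of unordered line-ups is huge:
    print(math.comb(ROSTER_SIZE, LINEUP_SIZE))      # about 8.8e16 candidates

    # Brute force is only feasible for a toy roster. Hypothetical skill scores and
    # pairwise "chemistry" bonuses stand in for real statistics.
    skill = {"A": 9, "B": 7, "C": 8, "D": 6, "E": 7}
    chemistry = {("A", "B"): 2, ("B", "E"): -1, ("C", "D"): 1}

    def lineup_score(players):
        total = sum(skill[p] for p in players)
        for pair in combinations(sorted(players), 2):
            total += chemistry.get(pair, 0)
        return total

    best = max(combinations(skill, 3), key=lineup_score)
    print(best, lineup_score(best))

The pitch, in essence, is that a machine like D-Wave’s explores this kind of combinatorial landscape natively rather than by scoring candidates one at a time.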

The following is the video of the interview:

A note of caution: quantum computing is still in its infancy. There is substantial skepticism in some quarters, both about what is actually happening inside the box and about whether the D-Wave system offers any performance advantage over conventional computers.

For example:

A team of quantum-computing experts in the US and Switzerland has published a paper in Science that casts doubt over the ability of the D-Wave Two quantum processor to perform certain computational tasks. The paper, which first appeared as a preprint earlier this year, concludes that the processor – built by the controversial Canadian firm D-Wave Systems – offers no advantage over a conventional computer when it is used to solve a benchmark computing problem.

While the researchers say that their results do not rule out the possibility that the processor can outperform conventional computers when solving other classes of problems, their work does suggest that evaluating the performance of a quantum computer could be a much trickier task than previously thought. D-Wave has responded by saying that the wrong benchmark problem was used to evaluate its processor, while the US–Swiss team now intends to do more experiments using different benchmarks.

Read more: http://physicsworld.com/cws/article/news/2014/jun/20/is-d-wave-quantum-computer-actually-a-quantum-computer

The abstract of the paper:

The development of small-scale quantum devices raises the question of how to fairly assess and detect quantum speedup. Here, we show how to define and measure quantum speedup and how to avoid pitfalls that might mask or fake such a speedup. We illustrate our discussion with data from tests run on a D-Wave Two device with up to 503 qubits. By using random spin glass instances as a benchmark, we found no evidence of quantum speedup when the entire data set is considered and obtained inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results do not rule out the possibility of speedup for other classes of problems and illustrate the subtle nature of the quantum speedup question.

Read more: http://science.sciencemag.org/content/345/6195/420

Quantum computing, in my opinion, is a goal worth pursuing. Even if the D-Wave system does not fulfil its promise, this will hardly be the end of the Quantum Computing effort. The goal of harnessing almost unimaginable computational power, of being able to solve problems which simply can’t be tackled with conventional computers, is too attractive to abandon.

156 thoughts on “Harnessing Infinity: The Promise of Quantum Computing”

    • The “C” language and processors specifically designed to run it have set back computing and software development significantly. I’m a big fan of zero operand instructions and stacks, because no computer does 2 + 2. At the hardware level you must do 2, 2, + because you can’t perform an operation without all the operands. So your compiler must translate. A LOT. That makes compilers complicated and quirky, with no two having the same quirks.
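      For readers who have never met a zero-operand machine, here is a minimal sketch in Python (purely illustrative) of that “2, 2, +” idea: operands are pushed onto a stack, and each operator consumes whatever is already sitting on top.
        import operator

        OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

        def eval_rpn(tokens):
            """Evaluate a reverse Polish (zero-operand) expression such as '2 2 +'."""
            stack = []
            for tok in tokens.split():
                if tok in OPS:
                    b = stack.pop()        # operands must already be on the stack
                    a = stack.pop()
                    stack.append(OPS[tok](a, b))
                else:
                    stack.append(float(tok))
            return stack.pop()

        print(eval_rpn("2 2 +"))          # 4.0
        print(eval_rpn("3 4 + 2 *"))      # (3 + 4) * 2 = 14.0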

      • Aren’t you simply arguing for algebraic over reverse Polish notation ??
        It’s the old HP versus TI battle.
        I’ve always used reverse Polish calculators, and have found it much preferable to me.
        g

      • Great app called Droid48 that simulates an HP 48G/GX or a 48S/SX on your android phone. If you have a large phone, like a Samsung Note4 or Note5, it works great. Brings back nostalgic memories of college. You can then reverse-polish until you wear out your fingers. I still remember programming my 48GX… using the stack made for simple and compact programs… superior to any TI at the time.

      • Computers don’t do either. They take two operands and use hardware to perform a function on them.

      • Processors don’t run C code; they run machine code. C code is compiled into assembly code, and assembly code is converted into machine code.

    • Even a perfect design can only take you so far.
      You need both.
      From a business standpoint, sometimes it’s better to invest in a faster processor than it is to invest the development time to make your code a little more perfect.

      • True. My complaint is that Climate Science™ cries that their badly designed Global Circulation Models would be fine if they just had more computing power.

  1. There is another use for it. We need a climate policy fraud detection system. There are a great many variables at play and differing data and model qualities. The bias aspect is a fairly simple computational component though. It needs to include money and greed as parameters at a minimum.

  2. I’ve followed AI since the ’70s and QC for the past 10 years. They both appeared feasible, but it still seems to me like trying to get to the moon by climbing a tree; at first you believe you are making pretty good progress …

    • Well said. I first studied AI 30 years ago and they said it was just about to break through (programming in LISP etc.). The mistake they make is they reduce intelligence to just logic. Intuition, emotion, bias, whims, irrationality etc are all an integral part of intelligence too. I doubt they’ve had much progress with those aspects.

  3. Well, there are all kinds of special purpose computers that solve only a limited number of problems well compared to other ways of solving those problems.
    With Fourier Transform optical systems you can do millions of Fourier transforms live and in real time, all at the same time. This can convert a map of arbitrary spatial information (picture) into a 2D map of a spatial frequency spectrum. You can then apply simple spatial filters to remove or enhance or otherwise modify some frequencies and not others, and then a similar optical system can perform millions of simultaneous inverse Fourier transforms, and recreate the original map (picture) with some frequencies absent or modified in some ways. It would take teraflops of conventional processing to do what a laser and a few lenses can do in real time.
    So QC has yet to prove its value.
    g
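    For anyone who wants to play with the digital analogue of that trick, here is a minimal numpy sketch (assuming numpy is available; the random array simply stands in for a real picture): forward FFT, suppress the unwanted spatial frequencies, inverse FFT.
      import numpy as np

      # A stand-in "picture": any 2-D array of intensities would do.
      img = np.random.rand(256, 256)

      # Forward 2-D Fourier transform, with the zero frequency moved to the centre.
      spectrum = np.fft.fftshift(np.fft.fft2(img))

      # A simple low-pass spatial filter: keep only frequencies near the centre.
      rows, cols = img.shape
      y, x = np.ogrid[:rows, :cols]
      radius = np.hypot(y - rows / 2, x - cols / 2)
      spectrum[radius > 30] = 0          # 30 is an arbitrary cut-off

      # The inverse transform recreates the picture with the high frequencies removed.
      filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))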

      • I think I wrote ” lenses “, well actually ” a few lenses “.
        So ergo, I must mean lenses, and NOT something else.
        G

    • The article stated:
      A D Wave system would consider all of the possible solutions at the same time,
      And George E stated:
      you can do millions of Fourier transforms live and in real time, all at the same time.
      The only computer that I know of that is capable of doing several different things “all at the same time” …… is the biological computer commonly referred to as the “human brain”.

    • I was under the impression that the main advantage of quantum computing was size. You pack a 4 inch cube with Josephson junctions, immerse it in liquid helium and voila, you have your entire supercomputer (except for all the interfaces).

  4. Hi Eric –
    After living through the ’80s, ’90s and part of the Noughties in the computing industry, my reaction to “quantum computing” and its promise is lukewarm to say the very least. I do have some popular science level understanding of the proposal which has, frankly, left me cold.
    Parallel processing is not new. A hardware architecture that actually uses quantum effects to store and process information (q-bits) doesn’t exist to the best of my knowledge and perhaps never will. Indeterminacy of state isn’t something normally pursued in the computer sciences, nor can I imagine it being of any particular use in that application.
    I very much suspect the current batch of commercial “quantum” computing solutions are software based rather than hardware, and the software isn’t revolutionary. In the ’70s there was quite a bit of attention given to what we used to call “single assignment languages”. These were computer language compilers designed to work in concert with mildly specialized hardware to enable parallel processing by an array of CPUs that shared a special type of memory, which allowed the machine to detect when a write operation had been performed to a particular memory location (address). The executable code was available to all processors in the array and each processor blocked until the data it needed to proceed became available (the location was written to).
    This technique allowed for what was then called “massively parallel” computation. It wasn’t cost effective at the time and it had problems. Those problems were never (at the time) addressed because Moore’s Law was in effect back then and it simply didn’t make economic sense to pursue.
    Moore’s Law has reached an apex and is now in decline. We can no longer expect a doubling of speed/density every 18 months simply due to advances in physical fabrication processes. Now we turn to software, and single assignment languages/massively parallel architectures are coming to the front again.
    Quantum Computing, like Artificial Intelligence and Expert Systems, is another techno-babble buzzword suitable for attracting venture capital. Nothing more. Does it have promise? Yes. Is it Quantum? No.
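    The “each processor blocks until the data it needs becomes available” idea maps loosely onto what futures give you in modern languages. A rough software-only analogy in Python (not the original hardware; the worker functions are invented for illustration):
      from concurrent.futures import ThreadPoolExecutor

      def load(name):          # pretend these are expensive, independent tasks
          return len(name)

      def combine(a, b):       # can only run once both operands have been "written"
          return a * b

      with ThreadPoolExecutor() as pool:
          fa = pool.submit(load, "sensor_a")   # each location is written exactly once
          fb = pool.submit(load, "sensor_b")
          # .result() blocks until the value is available, like a single-assignment cell
          print(combine(fa.result(), fb.result()))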

    • ” Indeterminacy of state isn’t something normally pursued in the computer sciences, nor can I imagine it being of any particular use in that application.”
      Indeed, a certain large computer company with a 3 letter name spent many tens of millions of dollars on Josephson Junction technology (possibly quite speedy).
      Then they discovered that an on/off switch based on said technology came up “I just don’t know if I’m ON or OFF” a few percent of the time.
      Awful hard to perform any useful computing if the machine does not know which bits are on or off.
      Cheers KevinK

    • Hear, hear.
      Bottom line, if you want to show “quantum computing” is faster, you need to show, mathematically, how the number of steps is reduced, not just imagine some magical parallelism that exists without specificity.
      Here’s an example of real world “I can do something faster”: https://en.wikipedia.org/wiki/Carry-lookahead_adder
      You can increase speed of computation by reducing the number of steps, or improving the speed of each step -> “q-bits” do neither. There simply is no magical information to be found that isn’t expressed in boolean logic with ones and zeros.
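      As a concrete illustration of “fewer steps”, here is a toy Python model of the carry-lookahead idea (a sketch, not production hardware): the generate and propagate signals are computed once, and every carry is then expressed directly in terms of them instead of waiting for a ripple.
        def cla_add4(a, b, c0=0):
            """Toy 4-bit carry-lookahead adder: every carry is computed directly from
            the generate/propagate signals instead of rippling bit by bit."""
            g = [(a >> i & 1) & (b >> i & 1) for i in range(4)]   # generate
            p = [(a >> i & 1) ^ (b >> i & 1) for i in range(4)]   # propagate
            c = [c0, 0, 0, 0, 0]
            c[1] = g[0] | (p[0] & c[0])
            c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0])
            c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0])
            c[4] = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) |
                    (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c[0]))
            s = [p[i] ^ c[i] for i in range(4)]
            return sum(s[i] << i for i in range(4)) | (c[4] << 4)

        for x, y in ((5, 9), (15, 1), (7, 7)):
            assert cla_add4(x, y) == x + y
        print("carry-lookahead matches ordinary addition")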

    • Actually they are trying to solve traveling salesman type problems, where the overall answer is an accumulation of numerous answers that are not simple yes-or-no answers.
      The only way to do this on current computers is to iterate over all possible paths; in theory a QC will do many at once.
      As for other hardware, mainstream (i.e. Intel) processors are hard to beat; they are at the front of the technology because they sell so many processors. Pair that with the floating point capability of video processing cards and, for a pretty modest amount of money, you can have some pretty fricking powerful computing power.
      At least compared to past super computers.

      • No, you do NOT have to check all possible paths. Look up “branch and bound”, for example. There is also a whole lot of theory about approximation algorithms and randomised algorithms, where you may be able to get near enough, or be wrong with practically insignificant probability.
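        A minimal illustration of the point in Python, using an invented 5-city distance matrix: even a crude branch and bound prunes whole families of tours the moment a partial route already costs more than the best complete tour found so far, so nothing close to “all possible paths” is ever examined.
          # Tiny branch-and-bound for a travelling-salesman tour over an arbitrary
          # 5-city distance matrix (values invented for illustration).
          D = [[0, 3, 1, 5, 8],
               [3, 0, 6, 7, 9],
               [1, 6, 0, 4, 2],
               [5, 7, 4, 0, 3],
               [8, 9, 2, 3, 0]]
          N = len(D)
          best = [float("inf")]

          def search(route, cost):
              if cost >= best[0]:                 # bound: this partial route is already too long
                  return
              if len(route) == N:
                  best[0] = min(best[0], cost + D[route[-1]][route[0]])   # close the tour
                  return
              for city in range(N):
                  if city not in route:           # branch: extend with each unvisited city
                      search(route + [city], cost + D[route[-1]][city])

          search([0], 0)
          print("shortest tour length:", best[0])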

    • Bartleby said:

      Quantum Computing, like Artificial Intelligence and Expert Systems, is another techno-babble buzzword suitable for attracting venture capital. Nothing more.

      I concur, ….. 100%.

    • D-Wave was challenged in 2014 on somewhat questionable grounds.
      The computer works. You can feed it problems; it sets up a scientific experiment based on the computational configuration, and then the quantum computer calculates a result. Analysis of the experimental results shows that it found answers, and that the answers were the kind of answers expected of a quantum computer. So the debate over whether the D-Wave is actually doing quantum calculations, or is some elaborate hoax with, say, a software simulation going on somewhere inside, is ended.
      The criticism concerned how the time to do the calculation compared to hard coded solutions on conventional computers. This is a 1st generation quantum computer from D-Wave with 5 years of development behind it that was tested against 60th generation conventional computers with 60 years of development behind them, and in some cases it was substantially faster than a hard coded competitor. In many cases it was slower or the same. Since these experiments were carried out D-Wave has introduced another generation of computer with 2 times as many qubits. Since quantum computers grow in performance exponentially with qubit count, in 5 or 10 years a single quantum computer may have greater computation capability for certain problems than the entire computing capacity of all computers made today.
      Will quantum computing replace von Neumann computing? No. Are the types of problems amenable to quantum processing worth solving faster? Yes.

  5. So based on JT’s description, “much more uncertainty = more information.” Interesting, very interesting. I guess the whole premise of my Doctoral Dissertation is just a sham; good thing Trudeau wasn’t on my committee (of course he would have been 17 at the time, but of course, Doogie Howser started his residency at 15).

    • Well he did tell Canadians that they needed to “rethink elements as basic as time and space” during his election run.

    • Well arguably, white Gaussian noise, being infinitely unpredictable (in the sense that no future state can be predicted from an infinite string of previous states), is 100% information (about the state of the system).
      G
      So what did YOU dissertate on the subject ?

  6. Nothing new. Stargate called it a “Z-point module”.
    (OOPS! My bad. This was about “computing power”, not “power storage (batteries)”. Maybe chuckle but otherwise, please ignore this comment.8-)

  7. Since the beginning of the information age, software engineers have struggled to keep up with the advancement of hardware, and so progress has been artificially stunted. A prime example is the video game industry, where advances in hardware are literally withheld so as to give game developers a chance to release a game for their chosen platform, which can take years if not sometimes a decade, as with Final Fantasy 15.
    The point I’m trying to make is that even with the advent of quantum computing, the sad reality is that software engineers will always be leagues behind hardware engineers.

    • Yes Dog, I agree. My brother is a video game programmer and he developed a pilot for a ufo game using curved polygons when that came out 15 or more years ago. It looked great and required much less computing power, but everyone was already too invested in flat triangular polygons to change.

    • Well when software mistakes are left in place, rather than fixed, and an endless layer of band aids is plastered on top, then it is no wonder software is so inefficient.
      I can’t even run my M$ Windows computer full time, because it keeps using more than half its time downloading and installing new “patches” before I can use it to do anything.
      g

      • Having misspent some of my youth doing software testing and configuration control back before most of the folks now alive were born, I have some sympathy for MS, Apple, et al. But frankly I don’t think it’s possible to patch modern software, with its vast attack surfaces and overwhelming complexity, into something safe and reliable.
        I gotta tell you. At the rate of 15 or so bug fixes a week it’s going to take a while to squash 10^god_alone_knows_what_power software bugs. And, of course, people make mistakes, so they’re almost for sure going to brick your computer a number of times.
        Welcome to the Internet of Horrors. Enjoy your visit.
        If your usage permits it, you might give some thought to one network connected computer for everyday stuff and a second, non-network connected computer for real work. No, I’m not kidding.

    • Oh, I don’t know if that is universally true. I’m getting a dual nVidia K-80 machine in tomorrow specifically because the dual K-10 machine is not powerful enough to do what I need to do without heavy sacrifices in performance (signal to noise ratio performance). The software I wrote 4 years ago to solve my problem is largely unchanged simply because the algorithms I use have no reason to be changed (efficiency issues aside).
      The bottleneck I almost always face is data throughput: the processor is stalled because of the time it takes to load data into the pipeline. Quantum computing will not solve this any better than an improvement in software/compilation will. Certainly you can make all of your storage fast static RAM (what caches use), but it’s expensive and power hungry (heat kills, too).
      There was much fanfare about memristors at some point, IIRC.

      • Mark — You’re right of course. But, to the very limited extent that I understand QC, processing massive amounts of data is pretty much exactly the opposite of what QC is supposed to be good at. What QC, if it can be done, should excel at is performing incredible numbers of simple computations in parallel and somehow — it’s never been clear to me exactly how — delivering up the best solution. For example it should presumably be able to try all 2**1024 possible 1024 bit keys to a sample of encrypted text and deliver up the key that produces something readable.
        Since it sometimes seems that nothing involving quantum mechanics is too far fetched to be impossible, maybe QC will someday be able to do exactly that. But I think it might be a while before every hacker in the world can use their $17 quantum box to access anybody’s bank accounts, communications, and digital door locks.

    • The job of a software engineer is to get the most out of the current hardware. If hardware is held back it’s a financial decision on how to profit from the waves of obsolescence. Consider the price structure of Intel chips, a small change in performance is supposedly worth a huge step in price.

  8. Quantum computing … should become a reality just after the first cold fusion reactor goes on line.
    I’ll believe it when I see it!

  9. ..OMG..If the Quantum Mechanics Computer finds it hard to pick the best hockey player in the NHL, then it is only good for scrap !

  10. The problem with a computer that ‘solves’ problems that no other computer can is that you may not be able to verify that the answer is right. I realize that in this post-normal science world, feeling it is right may be enough, but it is nice to have verification.

    • Some problems such as cracking cryptographic cyphers, one of the proposed uses for Quantum Computing, are easy to check once you have the solution – it is finding the solution which is difficult.

      • That’s correct, and of course decryption is the Holy Grail when it comes to peddling your QC box for BIG bucks and retiring to cruise the South Seas on your palatial yacht. Furthermore, for decryption, it probably doesn’t matter all that much if the correct answer leaks out during computation and isn’t there when the computer condenses to its solution (I think leak and condense are the terms they use). You can always try again and see if the next run generates a usable answer.

      • Unfortunately, I believe investing in this has less to do with retiring to the South Seas than it does with the South Sea Company of old England.

    • This has nothing to do with feeling or post-normal science. In many cases it is much easier to check whether a solution is correct than it is to find the solution.
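      A concrete, purely classical Python illustration of that asymmetry: verifying a claimed factorisation is a single multiplication, while finding the factors by trial division does all the real work (the primes below are just convenient examples).
        import math

        # Two well-known primes (the 10,000th and the 100,000th) multiplied together.
        N = 104729 * 1299709

        def verify(p, q):
            """Checking a claimed factorisation: a single multiplication."""
            return 1 < p < N and 1 < q < N and p * q == N

        def find_factor(n):
            """Finding a factor by trial division: far more work."""
            for d in range(2, math.isqrt(n) + 1):
                if n % d == 0:
                    return d
            return None

        p = find_factor(N)
        print(p, N // p, verify(p, N // p))   # True, but finding p did all the work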

  11. They’re fantasy. #1 translating information from reality to quantum: doesn’t work. #2 all at the same time but with actual output? We have that, it’s called congress/parliament – it’s why we have single speaker rules. #3 how do you program a quantum computer when adding data to the program changes what the program means?

  12. D-Wave is not a quantum computer; it is designed to do some things very fast. Praising D-Wave for fast processing of specific tasks is like praising a batting machine for batting a high average.
    Reminds me of Deep Blue beating that chess player. Deep Blue was fed the knowledge of three chess masters. It didn’t “think”, it simply ran down the numbers weighed against the input of chess knowledge in a form that software could use.
    AI and quantum computing are far behind the hype.

  13. D-Wave uses adiabatic quantum systems, which do not give you the immense speed-up for certain types of calculations that a system of entangled q-bits would give.
    Shor’s algorithm, in theory, would be able to crack the vast majority of today’s cryptographic systems, including RSA, but so far nobody has been able to create a machine containing more than the 10 q-bits necessary to factor the number 21. Every additional q-bit is much harder than the last, and nobody has managed to scale it up to the thousands needed to crack today’s cryptographic messages.
    D-Wave isn’t attempting to do that. There are other things that D-Wave’s “quantum” computers may be better at than classical computers, but speed is not one of them.
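    For what it’s worth, the “hard” part of Shor’s algorithm is finding the period r of a^x mod N; the rest is classical number theory. A toy Python sketch for the N = 21 case mentioned above, with the period found by brute force here, which is exactly the step a quantum computer is supposed to accelerate:
      from math import gcd

      N, a = 21, 2                      # a is coprime to N: gcd(2, 21) == 1

      # Period finding: the step Shor's algorithm speeds up with a quantum Fourier transform.
      r = next(k for k in range(1, N) if pow(a, k, N) == 1)     # r == 6 here

      # Classical post-processing: for even r with a**(r//2) not congruent to -1 mod N,
      # two gcd computations reveal the factors.
      x = pow(a, r // 2, N)             # 2**3 mod 21 == 8
      print(gcd(x - 1, N), gcd(x + 1, N))   # prints 7 3, i.e. 21 = 3 * 7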

  14. QC, neural networks, fuzzy logic, AI, always touted with much fanfare at first only to fade away (usually).
    Encoding lots of info into a quantum is only half of the problem, you also need to decode it somehow to get the info back out.
    AI has come around the “breakthrough” loop about three times in my career.
    One of the electrical engineering trade newspapers (EE Times) always did an April Fools’ Day front cover.
    Spoof of the year circa ~1980 was: IBM’s groundbreaking work in “Artificial Stupidity”. The spoof goes on to praise IBM: “Leave it to IBM to go where no other tech companies are going to break new ground”. And fantastic performance specs: IBM’s “ASS” (Artificial Stupidity System, of course) could calculate the wrong answer 3 million times faster than a human…..
    Another famous spoof was the first WOM – Write Only Memory – IC with 1000 times higher density than any other memory chip… This one was a real hoot with a great faux data sheet; one of the charts was “Pins Remaining versus Number of Insertions”. And of course the time delay to read back data was… you guessed it – infinity…
    Fuzzy Logic was all the rage back circa 1990, it could do everything better. One of the examples touted was how a fuzzy logic controller could make an electric heater warm up to max temp faster than using an “old fashioned” thermostat. The only thing that determines how fast a heater can warm up is the power supply, ain’t nothing the controller can do about it.
    So at this point I would rate QC as another fad, but you never know. Just don’t waste my tax dollars on it please.
    And the thought of a politician explaining technology is a real hoot.
    The current POTUS a few years back explained how bailing out a US car company by giving it to a European car company got us access to the “Technology of Small Cars”…
    What technology pray tell ? The big secret is…. all the parts are smaller…… Like making 13 inch wheels is a HUGE technological leap forward from making 16 inch wheels….
    Cheers, KevinK
    (Old EE Times spoofs paraphrased from memory)

    • 13 inch wheels are a huge technological leap from 16 inch wheels, because they reduce the unsprung mass significantly.
      G

    • After a little research I believe the spoof headline from about 1980 was more like;
      “While everyone else pursues Artificial Intelligence leave it to IBM to make new breakthroughs in Artificial Stupidity”.
      And a simple $10 fan with an Off/Low/Medium/Fast speed selector switch is “technically” an example of Fuzzy Logic. While an old fashioned fan with an On/Off switch is “technically” not Fuzzy Logic.
      Mostly buzzwords…

      • Please, no. Fuzzy logic asks “is the state 0, 0.1, 0.2, 0.3 … 1.0?”, not “is the state ‘0’ or ‘1’?”
        I.e. it’s not binary, it’s weighted logic.
        Giving it a silly name opened it up to disrespect, but it’s a valid approach to some problems.
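        A tiny Python illustration of that weighted-logic point (the membership ramps and rule weights are invented): instead of asking “is it hot, yes or no?”, each input gets a degree of membership between 0 and 1, and the rules blend those degrees.
          def hot(temp_c):
              """Degree (0..1) to which a temperature counts as 'hot' (invented ramp: 15 C -> 0, 30 C -> 1)."""
              return min(1.0, max(0.0, (temp_c - 15) / 15))

          def cold(temp_c):
              """Degree to which it counts as 'cold' (invented ramp: 20 C -> 0, 5 C -> 1)."""
              return min(1.0, max(0.0, (20 - temp_c) / 15))

          def fan_percent(temp_c):
              """Blend two fuzzy rules: 'if hot, run fast (100%)' and 'if cold, run slow (10%)'."""
              h, c = hot(temp_c), cold(temp_c)
              return (h * 100 + c * 10) / max(h + c, 1e-9)

          for t in (5, 18, 27, 40):
              print(t, "C ->", round(fan_percent(t)), "% fan")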

  15. With quantum computing will we will be able to, without opening up the computer, find out whether the cat is alive or dead?

  16. If you were allowed to open up the D-Wave box you would find some old fart programmer writing in Assembler.

    • > If you were allowed to open up the D-Wave box you would find some old fart programmer writing in Assembler.
      Or possibly a really pissed off cat.

      • “Real old farts write machine code.”
        I knew a bloke once who used to input the boot loader of his Altair in binary via the DIP switches from memory…

    • One of Rockwell’s old ICBM programmers told me that in the missile crisis of ’63 the nearest the US could place a nuke was about 150 miles, give or take, from Moscow. The claims for precise targeting were hubris. I asked why and was told the basic reason was the ICBM had only 4k of memory. The entire targeting system had to fit into 4k of RAM including the star recognition, geolocation, steering and aiming systems. The earth is not round, gravity varies, there is wind…
      How did they do it at all? Everything was written in machine language! Not a single bit was wasted. Efficiency was all. A dedicated processor with dedicated no-fat software can run far faster than something generic.
      The quantum computer the Perimeter showed at their tenth anniversary public bash in Waterloo had 8 QBits. I asked how many times they had to read the state of the atom to get a definite answer. 30,000,000 times.
      It could multiply 2*3 and add 2. It did it slowly, relatively speaking. So if in fifty years they completely solve the ‘state’ problem it will be 30,000,000 times faster, at least. Hopefully it won’t get the wrong answer(s).
      Loved the WOM story. (Write Only Memory) . Know some people like that.

      • It’s like the CVOS. A buddy of mine worked for a company that did process monitoring sensors and such, and for a particular customer he had done a lot of work to prove that once their process was tweaked in it was remarkably stable. Management, undeterred, decided they needed an additional sensor so they could close the process loop and get even better performance.
        Convinced it was a waste of time, but being told he must produce said sensor, he came up with the CVOS – the constant value output sensor. Just needs a 9V battery and it’s good to go, no adjustment required. All the engineers in the room about died trying not to laugh while the management was all ‘this is great!’. I still chuckle every time I think about it.

      • I’ve seen some code that was written in write-only memory, because nobody could read it afterwards. Mostly Perl or APL, with the occasional TECO.

  17. Quantum computers could greatly accelerate machine learning

    (Phys.org)—For the first time, physicists have performed machine learning on a photonic quantum computer, demonstrating that quantum computers may be able to exponentially speed up the rate at which certain machine learning tasks are performed—in some cases, reducing the time from hundreds of thousands of years to mere seconds. The new method takes advantage of quantum entanglement, in which two or more objects are so strongly related that paradoxical effects often arise since a measurement on one object instantaneously affects the other. Here, quantum entanglement provides a very fast way to classify vectors into one of two categories, a task that is at the core of machine learning.

    http://phys.org/news/2015-03-quantum-greatly-machine.html

  18. I started programming in 1967, on a PDP-8/i. QC, if memory serves, appeared in the ’70s and would make everything else obsolete “Real Soon”, so fast that it would have the answer before you asked the question. AI was about the same, it would even ask the question. Just around the corner. Now, old and gray, still waiting.
    And Steve, don’t knock Assembler. Takes longer to write, but if well done, faster and smaller than anything else. Low power, small embedded systems.

    • I started in about 1974 using “APL” (A Programming Language) on an old “DEC Writer” (combination typewriter / sprocket hole paper printer) connected via a 300 (or maybe a 150 baud ?) acoustic modem. This talked to a computer 20 miles away.
      First program I wrote; Blackjack (the card game). Whoo Hooo, heady times, I was one of about 3 students in my high school in the “computer science” group.
      Yeah, AI, Flying cars, Bases on Mars, QC, lots of stuff “just around the corner”, still waiting to see just exactly where that “corner” is.
      I did PDP-8, 8008, VAX assembler, some of the First DSP assembler code (TI and Analog Devices). Fun stuff, you could “feel” the bits moving around in the computer, some of the tools actually showed live updates of the bits changing in all the registers of the computer, sometimes you had 5 or maybe even 6 registers to watch.
      In the old days we only had ones and zeros, and sometimes we ran out of zeros and had to use “O’s”, ha ha ha…
      For the younger folks;
      APL was a “symbolic” programming language with no memory or compiler; you had to type in all your instructions one line at a time and try executing the program. If you typed in an incorrect instruction you had to “dump the memory” and start over by typing in all the instructions again from scratch. Very crude.
      A DEC Writer was a keyboard and a one line at a time printer, you typed in a line of text and hit Carriage Return (the old fashioned version of “enter”) then you waited a few seconds to see if the computer liked what you wrote, if it was happy it would print out a single line of text on the paper in front of you and advance the paper so you could see what you typed, then wait for your next instruction. Very crude.
      An acoustic modem was a really weird device; back then there was a wire (yes, a wire) from the wall of the room you were in to a phone. One of those really odd old phones that was huge and had two big round ends connected by a plastic piece. Back then you would pick up the phone, dial a number, wait for a strange squealing noise and then stuff the round ends of your phone into two big rubber suction cups. I know, unbelievable isn’t it. And if it all worked you could type 20 or maybe even 40 characters a second on your keyboard then wait a minute or two for a response.
      Then we moved all the way forward to color displays right in your hand with the computer inside always ready to connect to anyone anyplace…. And stored programs with a 100 different versions of Blackjack with a dozen color schemes to select from.
      And now after all this progress we can finally (using Twitter ™) type in a single line of text, then hit “enter” and wait a while to see if anybody answers……
      Ah, progress…
      Cheers, KevinK

      • Oh, I forgot, after all this progress in science nobody but nobody can tell me the average temperature one week from now at my location to much better than +/- 5 degrees F.

      • KevinK
        “Yeah, AI, Flying cars, Bases on Mars, QC, lots of stuff “just around the corner”, still waiting to see just exactly where that “corner” is.”
        You forgot to mention ‘renewable energy that will compete on cost with nuclear and coal-fired power plants’.

      • APL had memory. There were workspaces with commands like )SAVE ws. When you defined a function every character was saved and could be edited with the built in editor. Even on an IBM 1130. The interpreter could outperform better known compiled languages, and several compilers existed. It’s still in use.

    • You old farts would be jealous of the system I get to write software for. 😉
      Full disclosure: I actually SAW a card reader when I was 13, visiting what would be my high school later that year.

      • I moved on to punched cards as an input device for a batch processing system at college.
        At least then you did not have to type in all the lines all over again to fix a simple typo. But woe on you if you dropped your “deck” and had to sort them all out.
        I believe it was a “Boroughs Welcome” mainframe computer ? Or maybe a Honeywell ? Always with a number after the name, I think we had a 7700 ?
        For a while there was quite a number of mainframe computer manufacturers.

        But woe on you if you dropped your “deck” and had to sort them all out.
        I dropped a deck once. Worse than having to sort it out was that the machine that punched the cards was old and no longer printed the statement at the top of the card. So I came up with the brilliant idea of running the deck thru the reader and then matching the cards to the printout so I could sort them from there.
        Completely by chance, that was my first infinite loop. With a page feed to the printer in the middle of it….

        • > At least then you did not have to type in all the lines all over again to fix a simple typo. But woe on you if you dropped your “deck” and had to sort them all out.
        Most software that used punch card input allowed you to put a sequence number in cols 73-80 of the card. Reordering a dropped or otherwise mangled deck was just a matter of 8 passes through a card sorter.
        BTW, if you’re looking for a REALLY obnoxious input medium, don’t overlook paper tape. Not only was it fragile, it came in several different numbers of holes per column and in a variety of colors — some of which became transparent enough to confuse optical readers when wet or oily.
        Dreadful stuff.
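          For the curious, that “8 passes through a card sorter” procedure is exactly a least-significant-digit radix sort; here it is sketched in Python with a few made-up sequence numbers from columns 73-80.
            def card_sorter_order(cards):
                """Reorder a deck by its 8-character sequence field, exactly as 8 passes
                through a card sorter would: least significant column first."""
                deck = list(cards)
                for col in range(7, -1, -1):               # pass 1: rightmost column ... pass 8: leftmost
                    pockets = {d: [] for d in "0123456789"}
                    for card in deck:
                        pockets[card[col]].append(card)    # stable distribution into ten pockets
                    deck = [card for d in "0123456789" for card in pockets[d]]
                return deck

            dropped = ["00000300", "00000100", "00000250", "00000200"]
            print(card_sorter_order(dropped))   # ['00000100', '00000200', '00000250', '00000300']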

      • My second start at programming was much better than my first.
        But it required the old computer card dance; write the code, enter the code onto cards, run the deck and submit the program, come back tomorrow for the output.
        Only all of the card machines were beginning to fall apart at Penn State’s Ogontz campus.
        The machine for punching cards didn’t print on them.
        So after creating your card deck we used a second machine that didn’t punch anymore to enter the print lines.
        Then we got to read the cards to check our typing.
        Yeah, spilling a deck was a downer and sometimes a good thing as sorting cards took your mind off other problems.
        Another problem was getting caught in a rainstorm and getting cards wet. Damp cards with thumbed edges shouldn’t be fed into card readers; the technicians don’t like you much then. Especially if there is a waiting line of impatient students.
        Then I moved to a community college with a brand new, fully outfitted IBM mainframe (System/370 4341), including a multitude of color terminals and even a room of IBM 286 PC-terminals with user configurable color.
        No waiting for output, results were returned to our screen and we printed programs and output on demand rather than every time.
        CMS operating system rather than the typical TSO, but very similar. An on demand TSO editor that seemed a miracle in those days.
        Then again, I view many of today’s code editors as being minor miracles for inputting and reading code. What can I say, I’m easily spoiled.

      • “Don K April 20, 2016 at 12:13 am
        BTW, if you’re looking for a REALLY obnoxious input medium, don’t overlook paper tape.”
        It had its place, such as in milling machines for repetitive machining work, and worked quite well in my experience. This was before computer numeric control (CNC) systems.

      • First computer I touched: IBM 360, Waterloo U – high school math class day visit.
        First program developed from scratch: Fortran 4, calculated n! It took me most of the day. Punch cards of course and a wi-ide carriage line printer.
        I asked for assistance from a former student of my HS, enrolled in pure math. After a while scratching her head, she abandoned me saying, “We don’t speak the same language!”

      • System 360/370 were SNA architecture platforms. A 4341 I don’t think was Sys/370; Sys/360 probably. My first mainframe was a 4381 with 48MB of RAM. My mobile in my “skyrocket” has 1000 times more RAM.

    • Who you calling “old fart”? I resemble that remark!
      Assembler was and is the way to fly. Higher level languages are just ways for people who can’t handle assembler to write code and we have been increasing computer speed and memory ever since they were invented in order to compensate for the bad/inefficient code people write. Assembler separated the men from the boys because you couldn’t “fake” it in assembler.
      And yes, code IS self documenting.

      • Yeah, the code became self documenting when assembly phrases read like English to the mind.
        IBM BAL Assembly here.
        As satisfying as it is to write a solid Assembly program, it is still sweet to write a chunk of good code in many higher level languages.
        Those forays Willis has taken us on using the R language are very interesting.
        I spent too many years coding in FOCUS and C with TSO snippets for db allocations to ignore them.
        Though many times, I cursed what the code must’ve compiled to and went over my program seeking algorithms that were more explicit.
        As my COBOL teacher told us day in and day out: Always code explicitly! Implicit code means that you are trusting some other bored programmer to make decisions for you.

      • A big part of the problem is that programmers have to write code that will run on many different machines these days. Add to that new processors coming out daily, and it’s no longer worth the effort to become good at assembly language.
        High quality optimizers these days can compete with all but the most expert of assembler writers, and do so in a small fraction of the time.
        My first two jobs were in 6805 and 80186 assembler.

    • And Steve, don’t knock Assembler. Takes longer to write, but if well done, faster and smaller than anything else. Low power, small embedded systems.
      Not necessarily true.
      1/. FORTH was always more compact than assembler, and was ideal for very small RAM/ROM environments. Of course you can write FORTH programs in assembler I suppose.
        2/. It depends on the code and how it’s written. IIRC Gary Kildall as a student was given some huge chunk of assembler to fix. He rewrote it in FORTRAN and fixed its bugs. It was smaller and ran ten times faster.
        3/. It depends on the compiler. Years ago I made a living writing embedded code in C and assembler. Assembler was faster. More recently, writing for x86 in C, I looked at the compiled code as assembler. I couldn’t have written tighter code than that. C optimisation and efficient use of registers makes it very hard to compete using assembler.

      • Forth more compact and faster? Only with a poor Assembly programmer. After all, compilers produce Assembly (machine code) so at best they could equal a good assembly programmer. Certainly faster to program in high level languages, so in most cases, far more practical. And darn few of us old geezer machine code types. Probably helped that memory was $1/byte back then. 4K cost $4000. (and much bigger dollars than now) So we learned to write tight code. I do not have a photographic memory, I documented the daylights out of my code in case I had to come back and modify it later. My boss, a COBOL programmer was impressed. He could read what I wrote (IBM 360/30 BAL Rah!) I was writing “Structured” programs long before Yourdon invented them. Also self defense.

      • To be a really great “assembler programmer” one has to know exactly how the “processor chip” functions ….. so that they can “design” their program to take advantage of all of said “chips” attributes.
        An assembler programmer, ….. (meaning the “assembler” converts each coded “instruction” directly to machine language (binary code)), …… tells the “processor chip” exactly what to do and when to do it. And it don’t get any better, quicker or more efficient than that.
        One has to be an “original thinker” to be a really good “assembler” programmer.
        Cheers, ….. from an ole computer designing/programming dinosaur. HA, I’se remember back when a 4 micro-second read-write memory chip was “faster than quick”.

  19. I’ll believe in quantum computing when there’s a fusion-powered space elevator taking us to LEO.

  20. It’s 1890. Wild-eyed inventors working in garages in France, the US, the UK and Germany are predicting that their radical new machines could provide motive power for personal carriages and wagons – without steam. The real wackos were predicting that such machines might enable men to fly through the air like birds.
    Tom Watson, founder of IBM, famously predicted that the world demand for electronic computers would be five, max. Ken Olsen, founder of Digital Equipment, famously threw Steve Jobs out of his office for claiming that ordinary people would want to have computers in their homes.
    We ignore possible futures at our own risk.
    The interesting question is – what if? For this, we need science fiction.
    On another thread it was pointed out that our greatest peril is the arrival of an errant bolus from space. The classic exploration of this possibility is Jerry Pournelle’s and Larry Niven’s “Lucifer’s Hammer” in which a comet that is rated a near miss breaks up rounding the sun and the pieces disrupt our comfortable life here on terra (was) firma.
    For quantum computing, the current classic is Matthew Mather’s “The Complete Atopia Chronicles.” One of the first problems the D-Wave folks point to is the sports team problem. Imagine if Google and Facebook were to combine all of their info on each of us, our history, likes and dislikes, locations, activities and friends and feed it all into QC – Oh wait, who are the investors in D-Wave?
    As the book describes, one result is Pfuture News – the news before it happens, to a high statistical probability. “With great powers, they said, came strange responsibilities.”
    The good news (depending on your view of the future), is that quantum computing is developing in a highly competitive entrepreneurial environment. It will survive based on its ability to solve real problems in the real world, not by government fiat or personal assertion (are you listening MM?).
    I can’t resist another quote, about the advantages of turning control of your physical body over to an AI bot (proxxi) so that you can do the really “important” stuff, like virtual sex, etc.
    “Imagine performing more at work while being there less. Want to get in shape? Your new proxxi can take you for a run while you relax by the pool!” she exclaimed, stopping her walk to look directly into each viewer’s eyes. “Look how you want, when you want, where you want, and live longer doing it. Create the reality you need right now with Atopian pssionics. Sign up soon for zero cost!”

  21. The great quantum computing scam. Sound familiar? This is just another fraud by “scientists” to raise money from a generous and long-suffering public by waving big words around. If a thing violates all common sense, then it’s the thing that’s invariably at fault, not common sense. Into the hocus pocus bucket with this.

  22. There are only two options really. D-Wave is wrong in stating that their machine is a quantum computer. The other alternative is that Quantum Theory is wrong. Pick one and then file a proposal for grant funding.

  23. This topic was broached a couple of years ago on a forum I participate in:
    https://channel9.msdn.com/Forums/Coffeehouse/flops-Super-computers-and-Quantum-Computing
    Some of my comments back then:
    Jan 20, 2014 at 7:25PM
    “While D-Wave’s hardware is better at dealing with structured code, it runs neck-and-neck with the “fake” system when tackling random problems. ”
    http://www.engadget.com/2014/01/20/google-tests-the-performace-limits-of-d-wave-quantum-computers/
    – – –
    Dec 27, 2013 at 4:10PM
    A company near me sold a quantum computer to Lockheed Martin.
    http://www.theglobeandmail.com/news/british-columbia/bc-firms-superfast-quantum-computer-upgrade-attracts-lockheed-martin/article10825838/
    It’s hard to determine what is meant by speed compared to a conventional computer. Is it compared to a single transistor? Quantum state means it has all possible values, so how do you program a quantum computer ?
    — – –
    Energy consumption is an important factor in super computer usage. It’s an additional cost that must be considered for each program run.
    There are two main obstacles that need to be overcome for future improvements :
    Memory: http://www.theregister.co.uk/2013/09/14/intel_exascale_update_deepdive/
    And interconnect: … http://insidehpc.com/2013/11/15/doe-awards-25-4-million-exascale-interconnect-design/

  24. I remember IBM bought the Inmos transputer in the 80’s and I saw it demonstrated at IBM’s development lab in Winchester, UK. I was impressed. It was claimed to be the future of computing. Although we have symmetric multiprocessing (SMP), multiple cores, 64bit architecture and oodles of RAM even on laptops, such as the one I am using right now, etc these days, it is nothing like what the transputer was. I don’t think the transputer ever got in to production outside the lab.

    • The Transputer made it into limited production, and was not all that impressive. 4 cores, not all that fast. The I7-Quad is getting a bit old, but could blow the doors off of anything prior.

      • From memory, the demo unit had 8 “cores” with a 9th to control communications between all the others.

  25. Ya gotta love it.

    “It turns out that there are three broad categories of problem where your best bet is a quantum computer. The first is a Monte Carlo type simulation, the second is machine learning, and the third is optimization problems that would drive a regular computer nuts – or, at least, take a long time for it to process…”

    Only the computer they’re describing does nothing new in computing for these problems. Sheer hardware miniaturization is not automatically more intelligent computing, just faster.
    Whenever one looks at how nature solves these problems now, one finds that neural cells grow and interconnect to better assimilate and process the new data and problems.
    What is also likely, though I’ve never heard of any such discovery, is that nature has surely developed or implemented quantum processing somewhere, somehow. As with so many other mechanical problems, nature has had the time, the materials and nearly infinite opportunity to develop it. Consider those little bugs, germs, planarians, insects and even birds and squirrels that seem to be much smarter than their brain sizes.
    Look on the web for methods to keep squirrels out of bird feeders. There are some hilarious videos out there where squirrels run the gamut and still get the bird seed.

  26. No one imagined a device that combined a Star Trek communicator and tricorder into one handy unit that could fit in your shirt pocket. No one imagined that such a device would become ubiquitous across all of society. Not even Steve Jobs imagined this when he was watching Star Trek in his youth, or even when working on his NeXT computer. Steve Jobs failed in packaging and marketing technology with the NeXT computer but rode a new wave of technology advancements and became wildly successful in packaging and marketing those advancements as a consumer product.
    Technology advances and application is chaotic and unpredictable (kind of like weather and the climate). Quantum computing could change how civilization works as cell-phones have, or be a niche product, or a big zero like the vacations we once imagined we could take on lunar colony resorts by the year 2000.

    • It all depends on the actual technology breakthroughs.
      Mobile technology depends on ultra high transistor packing density, lithium ion batteries, LCD screens and gigahertz radio wave technology.
      A vacation on the moon depends on massive access to cheap energy, or some kind of quantum tunnelling teleportation.

      • With a space elevator on both the earth and the moon, you could dramatically reduce the amount of energy required.
        Of course that would require developing the technology to build space elevators. Sigh.

  27. The main problems with Climate Science are poor models and grossly inadequate data coverage. Quantum Computers are not going to help with this.

    • Au Contraire. QC may be able to get blatantly incorrect answers in orders of magnitude less time. We’ll have 10000 models that don’t make correct predictions. 10000 is 100 times better than 100, right?

    • More computer horsepower would enable a reduction in grid size.
      Smaller grids would mean some of the things that currently have to be parameterized could be modeled instead.
      Of course the answers would still be wrong, but they may not be quite as wrong as before.
      (Smaller grid sizes would be a big improvement for weather forecasting though.)

  28. Reading through the commentary here, I propose a variation of Godwin’s Law: If an internet conversation about computers goes on long enough, it will inevitably devolve into reminiscing about card reader / punch card programming. 🙂
    rip

    • At least nobody has talked about the really old computers that had to be programmed with toggle switches.

      • Well, now that you mention it, I did in fact see one of those… I had a cousin that worked at a bank, very early 1970’s.
        But it was an advanced version, you entered enough code with the switches (a “boot loader”) so that the computer could then read in an operating system from a mag tape to “boot” the machine. Comes from the term bootstrapping.
        My cousin was good at it, he could enter about 100 lines of code with the toggle switches in about 5 minutes. One word of caution, don’t tap him on the shoulder and ask him; “What’s that you’re doing” in the middle of it…..
        As non-volatile memory advanced, the “boot loader” became the BIOS: Basic Input Output System. Still in use today: the first thing that happens after power up (tablet, iPhone, laptop, mainframe) is that the processor starts executing the BIOS code. This adds support for floppy disks, hard drives, com ports, etc.

  29. ripshin: Oh please. I used punch cards for a few years and paper tape as well. A giant pain. Don’t miss them in the slightest. I remember when disk drives were measured in kilobytes (DEC DF-32 anyone?) and the primary I/O was a KSR-33 Teletype. UGH! My ancient desktop has 2.5 GByte memory, 250 GByte hard drive, a nice lcd monitor, with 5 in. optical disk for backup. 15 years old. Newer stuff is far better. (Well, maybe not software …) As for the past, good riddance.
    Mark W: Fuzzy Logic has its place. The circular file comes to mind.

    • Mostly I agree. However the first remotely modern computer I worked on (the previous ones had vacuum tubes) was the CDC1604. It used a modified IBM Selectric typewriter for a console. The film ribbons were a nuisance, but the keyboard was far superior to anything I’ve ever seen hooked to a modern PC.

      • > Do I remember the dishwasher sized IBM disk drives with the removable platters?
        All too well. BTW if you put one of those that someone had dropped on the floor thereby bending a couple of platters into the disk drive and pressed LOAD it made a most interesting and quite expensive noise when the heads crashed into the bent platters.

  30. It is quite possible the brain also performs quantum computing at the molecular level. The usual objection, that this environment is too hot (i.e. body temperature) to maintain quantum coherence for any length of time, does not hold water, because the system is as far from thermodynamic equilibrium as it can be. Subsystems of non-equilibrium pumped systems can stay at an arbitrarily cold virtual temperature for extended periods, as is demonstrated by laser cooling.
    If power supply of neural tissue is turned off, that is either its incoming sugar or oxygen stream is severed, the brain suffers instant, irreversible structural damage. Quite odd behavior from an organ, which is meant to last for many decades. It shows Nature tried to solve a nearly unresolvable problem here (due to some irresistible evolutionary pressure), rather successfully, I must add. What this problem might be is anyone’s guess, but it may well be quantum computing or something even better. Who knows?

    • It is quite possible the brain also performs quantum computing at the molecular level.
      Is this not just one of those models proponents of quantum computing use to assert its merits, but with little evidence? They can actually map the signals in a small number of neurons. What is clear is that the signal is amplitude “modulated”, in that the intensity of the signal influences output. In short, neurons work in analog, not digital. This means a single neuron connection can have, in theory, an infinite number of states (but only one state at a time – not quantum). But the power really comes when you add to that the parallel nature of how the brain processes data. Millions and millions of neurons can act together (and remember, each connection can transfer information from a possibly infinite number of states) in different parts of the brain. Work on simpler creatures, where the cells have been doped with a photo-chemical marker, shows how a neural system works as a system.

    • The interruption or loss of sugar or oxygen stream does not always cause the human brain to suffer instant, irreversible structural or functional damage. Now I specifically noted the human brain because there are a few animal species that when the weather turns cold their bodies and brains will freeze solid ….. and will remain frozen until the Springtime warm temperatures return and defrost their bodies …… and they will continue doing what they had been doing before they were frozen.
      The brains of higher level animals, especially the human brain , ….. is a biological self-programming super computer that is capable of increasing its memory capacity (growth of new neurons and their respective synaptic connections) if and when it is needed.
      The architecture of a biological computer is unlike that of an electronic computer in that the latter utilizes a physical storage addressing scheme for the storing and/or recalling of info or data ……………… whereas the former (brain) utilizes the data or info itself for the storing of new data or the recalling of previously stored data.
      You might find this commentary of mine an interesting read, to wit:
      Biology of a cell: Genetic memory versus Environmental memory
