Guest essay by Eric Worrall
The Register has published a fascinating video about Quantum Computing, an interview with D-Wave, a company which manufactures what they claim are quantum computing systems.
According to The Register:
It turns out that there are three broad categories of problem where your best bet is a quantum computer. The first is a Monte Carlo type simulation, the second is machine learning, and the third is optimization problems that would drive a regular computer nuts – or, at least, take a long time for it to process.
An example of this type of optimization problem is this: Consider the approximately 2,000 professional hockey players in North America. Your task is to select the very best starting line-up from that roster of guys.
There are a lot of variables to consider. First there’s all the individual stats, like how well they score, pass, and defend. But since hockey is a team sport, you also have to consider how well they work when combined with other specific players. When you start adding variables like this, the problem gets exponentially more difficult to solve.
But it’s right up the alley of a quantum computer. A D Wave system would consider all of the possible solutions at the same time, then collapse down to the optimal set of players. It’s more complicated than I’m making out, of course, but it’s a good layman-like example.
Read more: http://www.theregister.co.uk/2016/04/18/d_wave_demystifies_quantum_computing/
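To get a feel for why such a problem blows up, here is a back-of-the-envelope count in Python (the six-player starting lineup is an assumption, and this ignores the pairwise “chemistry” terms that make the real problem so much harder):

```python
from math import comb

# Just counting the ways to pick a 6-player starting lineup from ~2,000
# professionals, before any "how well do they play together" terms are added.
players = 2000
lineup = 6
print(comb(players, lineup))   # 88224108612633000, roughly 9e16 possible lineups
```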
The following is the video of the interview:
A note of caution: quantum computing is still in its infancy. Substantial skepticism has been expressed in some quarters about what is actually happening inside the box, and about whether the D-Wave system offers any performance advantage over conventional computers.
For example:
A team of quantum-computing experts in the US and Switzerland has published a paper in Science that casts doubt over the ability of the D-Wave Two quantum processor to perform certain computational tasks. The paper, which first appeared as a preprint earlier this year, concludes that the processor – built by the controversial Canadian firm D-Wave Systems – offers no advantage over a conventional computer when it is used to solve a benchmark computing problem.
While the researchers say that their results do not rule out the possibility that the processor can outperform conventional computers when solving other classes of problems, their work does suggest that evaluating the performance of a quantum computer could be a much trickier task than previously thought. D-Wave has responded by saying that the wrong benchmark problem was used to evaluate its processor, while the US–Swiss team now intends to do more experiments using different benchmarks.
The abstract of the paper:
The development of small-scale quantum devices raises the question of how to fairly assess and detect quantum speedup. Here, we show how to define and measure quantum speedup and how to avoid pitfalls that might mask or fake such a speedup. We illustrate our discussion with data from tests run on a D-Wave Two device with up to 503 qubits. By using random spin glass instances as a benchmark, we found no evidence of quantum speedup when the entire data set is considered and obtained inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results do not rule out the possibility of speedup for other classes of problems and illustrate the subtle nature of the quantum speedup question.
Read more: http://science.sciencemag.org/content/345/6195/420
Quantum computing, in my opinion, is a goal worth pursuing. Even if the D-Wave system does not fulfil its promise, this will hardly be the end of the Quantum Computing effort. The goal of harnessing almost unimaginable computational power, of solving problems which simply can’t be tackled with conventional computers, is too attractive to abandon.

MIPS are no substitute for good design.
The “C” language and processors specifically designed to run it have set back computing and software development significantly. I’m a big fan of zero-operand instructions and stacks, because no computer does 2 + 2. At the hardware level you must do 2, 2, + because you can’t perform an operation without all the operands. So your compiler must translate. A LOT. That makes compilers complicated and quirky, with no two having the same quirks.
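A minimal sketch of the stack-machine idea in Python (the function and the sample expressions are just illustrations):

```python
# Zero-operand (stack) evaluation: operands are pushed first, and each
# operator pops the values it needs off the stack.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()              # both operands must already be there
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "2 2 +" is the stack-machine form of the infix expression "2 + 2".
print(eval_rpn("2 2 +".split()))       # 4.0
print(eval_rpn("3 4 2 * +".split()))   # 3 + (4 * 2) = 11.0
```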
Aren’t you simply arguing for reverse Polish over algebraic notation?
It’s the old HP versus TI battle.
I’ve always used reverse Polish calculators, and have found it much preferable to me.
g
I’d change that to “With no two having the same quarks”.
Notation is irrelevant. Algebraic or reverse Polish can be compiled into optimal machine language.
Great app called Droid48 that simulates an HP 48G/GX or a 48S/SX on your android phone. If you have a large phone, like a Samsung Note4 or Note5, it works great. Brings back nostalgic memories of college. You can then reverse-polish until you wear out your fingers. I still remember programming my 48GX… using the stack made for simple and compact programs… superior to any TI at the time.
Computers don’t do either. They take two operands and use hardware to perform a function on them.
Processors don’t run C code; they run machine code. C code is compiled into assembly code, and assembly code is converted into machine code.
Even a perfect design can only take you so far.
You need both.
From a business standpoint, sometimes it’s better to invest in a faster processor than it is to invest the development time to make your code a little more perfect.
True. My complaint is that Climate Science™ cries that their badly designed Global Circulation Models would be fine if they just had more computing power.
Harnessing Stupidity: Quantum Computing Lesson from Justin Trudeau
Don’t malign PM Zoolander: he’s just what the low-information electors demand, and just what the lazy media give them.
You missed his closing comment where he recited pi to the 19th decibel.
pi = -sqrt(-1).Ln(-1)
And that is exact to the 10^43 decibels..
g
One wonders what the value of Pi would be if left in the hands of the GISS? Would it be 4 by now?
That’s loud!
Pi to the 19th decibel, eh? That be mighty loud, methinks!
Unlike his deficit, which runs to about 19 digits.
The one good thing about Turdeau is you don’t have to make this stuff up.
19th decibel? Just how loud is your Pi?
My Dear Leader!
Is that chalk dust on his fingers?
There is another use for it. We need a climate policy fraud detection system. There are a great many variables at play and differing data and model qualities. The bias aspect is a fairly simple computational component though. It needs to include money and greed as parameters at a minimum.
I’ve followed AI since the ’70s and QC for the past 10 years. They both appeared feasible, but it still seems to me like trying to get to the moon by climbing a tree: at first you believe you are making pretty good progress …
Well said. I first studied AI 30 years ago and they said it was just about to break through (programming in LISP etc.). The mistake they make is they reduce intelligence to just logic. Intuition, emotion, bias, whims, irrationality etc are all an integral part of intelligence too. I doubt they’ve had much progress with those aspects.
Well, there are all kinds of special-purpose computers that solve only a limited set of problems, but solve them well compared to other ways of solving the same problem.
With Fourier Transform optical systems you can do millions of Fourier transforms live and in real time, all at the same time. This can convert a map of arbitrary spatial information (a picture) into a 2D map of its spatial frequency spectrum. You can then apply simple spatial filters to remove, enhance, or otherwise modify some frequencies and not others, and then a similar optical system can perform millions of simultaneous inverse Fourier transforms and recreate the original map (picture) with some frequencies absent or modified in some way. It would take teraflops of conventional processing to do what a laser and a few lenses can do in real time.
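A rough digital analogue of the optical setup described above, sketched in Python with NumPy (the array size and cutoff radius are arbitrary choices for illustration):

```python
import numpy as np

# Forward 2-D Fourier transform, a simple spatial-frequency filter,
# then the inverse transform to rebuild the (filtered) picture.
image = np.random.rand(256, 256)                 # stand-in for the input picture

spectrum = np.fft.fftshift(np.fft.fft2(image))   # 2-D spatial-frequency map

# Low-pass filter: keep only frequencies within an arbitrary radius.
ny, nx = image.shape
y, x = np.ogrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
mask = (x**2 + y**2) <= 30**2

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
```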
So QC has yet to prove its value
g
Do you mean lenses, or something else?
I think I wrote ” lenses “, well actually ” a few lenses “.
So ergo, I must mean lenses, and NOT something else.
G
The article stated:
“ A D Wave system would consider all of the possible solutions at the same time,”
And George E stated:
“ you can do millions of Fourier transforms live and in real time, all at the same time.”
The only computer that I know of that is capable of doing several different things “all at the same time” …… is the biological computer commonly referred to as the “human brain”.
I was under the impression that the main advantage of quantum computing was size. You pack a 4 inch cube with Josephson junctions, immerse it in liquid hydrogen and voila, you have your entire supercomputer (except for all the interfaces).
You had the wrong impression.
Hi Eric –
After living through the ’80s, ’90s and part of the Naughties in the computing industry, my reaction to “quantum computing” and its promise is lukewarm, to say the very least. I do have some popular-science-level understanding of the proposal, which has, frankly, left me cold.
Parallel processing is not new. A hardware architecture that actually uses quantum effects to store and process information (q-bits) doesn’t exist to the best of my knowledge and perhaps never will. Indeterminacy of state isn’t something normally pursued in the computer sciences, nor can I imagine it being of any particular use in that application.
I very much suspect the current batch of commercial “quantum” computing solutions are software-based rather than hardware, and the software isn’t revolutionary. In the ’70s there was quite a bit of attention given to what we used to call “single assignment languages”. These were computer language compilers designed to work in concert with mildly specialized hardware to enable parallel processing by an array of CPUs that shared a special type of memory, which allowed the machine to detect when a write operation had been performed to a particular memory location (address). The executable code was available to all processors in the array and each processor blocked until the data it needed to proceed became available (the location was written to).
This technique allowed for what was then called “massively parallel” computation. It wasn’t cost effective at the time and it had problems. Those problems were never (at the time) addressed because Moore’s Law was in effect back then and it simply didn’t make economic sense to pursue.
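Something in the spirit of that single-assignment / dataflow idea can be sketched today with ordinary futures, where each value is produced exactly once and consumers block until it exists; a loose Python illustration, not a reconstruction of those 1970s systems:

```python
from concurrent.futures import ThreadPoolExecutor

# Each intermediate value is "written" exactly once; downstream work
# blocks until the values it needs become available.
def work_a():
    return 2 + 2

def work_b():
    return 3 * 3

with ThreadPoolExecutor() as pool:
    fa = pool.submit(work_a)    # write to "location" a
    fb = pool.submit(work_b)    # write to "location" b
    # .result() blocks until each location has been written.
    total = fa.result() + fb.result()

print(total)   # 13
```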
Moore’s Law has reached an apex and is now in decline. We can no longer expect a doubling of speed/density every 18 months simply due to advances in physical fabrication processes. Now we turn to software, and single assignment languages/massively parallel architectures are coming to the front again.
Quantum Computing, like Artificial Intelligence and Expert Systems, is another techno-babble buzzword suitable for attracting venture capital. Nothing more. Does it have promise? Yes. Is it Quantum? No.
” Indeterminacy of state isn’t something normally pursued in the computer sciences, nor can I imagine it being of any particular use in that application.”
Indeed, a certain large computer company with a 3-letter name spent many tens of millions of dollars on Josephson junction technology (possibly quite speedy).
Then they discovered that an on/off switch based on said technology came up “I just don’t know if I’m ON or OFF” a few percent of the time.
Awful hard to perform any useful computing if the machine does not know which bits are on or off.
Cheers KevinK
Are you saying it was HAL that invented fuzzy logic ??
g
Hear, hear.
Bottom line, if you want to show “quantum computing” is faster, you need to show, mathematically, how the number of steps is reduced, not just imagine some magical parallelism that exists without specificity.
Here’s an example of real world “I can do something faster”: https://en.wikipedia.org/wiki/Carry-lookahead_adder
You can increase the speed of computation by reducing the number of steps, or by improving the speed of each step; “q-bits” do neither. There simply is no magical information to be found that isn’t expressed in boolean logic with ones and zeros.
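For reference, a tiny Python rendering of the carry-lookahead trick linked above: every carry is computed directly from generate/propagate signals instead of rippling bit by bit (the 4-bit width is just for illustration):

```python
# 4-bit carry-lookahead addition: each carry is expressed directly in terms
# of the generate (g) and propagate (p) signals, so no carry waits on another.
def cla_add4(a, b, c0=0):
    g = [(a >> i & 1) & (b >> i & 1) for i in range(4)]   # generate
    p = [(a >> i & 1) | (b >> i & 1) for i in range(4)]   # propagate

    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))

    carries = [c0, c1, c2, c3]
    total = 0
    for i in range(4):
        total |= ((a >> i & 1) ^ (b >> i & 1) ^ carries[i]) << i
    return total, c4          # 4-bit sum and carry-out

print(cla_add4(0b0111, 0b0101))   # (12, 0)
```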
Or even better, show it with the wonderful power of a stopwatch.
Actually they are trying to solve traveling-salesman-type problems, where the overall answer is an accumulation of numerous partial answers rather than a single yes or no.
The only way to do this on current computers is to iterate over all possible paths; in theory a QC will do many at once.
As for other hardware, mainstream (i.e. Intel) processors are hard to beat; they are at the front of the technology because they sell so many processors. Pair that with the floating-point capability of video processing cards and, for a pretty modest amount of money, you can have some pretty fricking powerful computing power.
At least compared to past super computers.
No you do NOT have to check all possible paths. Look up “branch and bound” for example. Also there’s a whole lot of theory about approximation algorithms and randomised algorithms, where you may be able to get near enough, or to be wrong with practically insignificant probability.
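A toy branch-and-bound sketch in Python, for a four-city travelling-salesman instance with made-up distances; partial tours are pruned as soon as they already cost more than the best complete tour found so far:

```python
import math

# Made-up symmetric distance matrix for four cities.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

best = {"cost": math.inf, "tour": None}

def branch(tour, cost):
    if cost >= best["cost"]:          # bound: prune hopeless partial tours
        return
    if len(tour) == len(DIST):        # complete tour: close the loop
        total = cost + DIST[tour[-1]][tour[0]]
        if total < best["cost"]:
            best["cost"], best["tour"] = total, tour[:]
        return
    for city in range(len(DIST)):     # branch: try each unvisited city next
        if city not in tour:
            branch(tour + [city], cost + DIST[tour[-1]][city])

branch([0], 0)
print(best)   # {'cost': 18, 'tour': [0, 1, 3, 2]}
```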
You are correct, some of those types of solutions were likely used in electronic design trace routers. I was being sloppy.
Bartleby said:
I concur, ….. 100%.
D-Wave was challenged in 2014 on somewhat questionable grounds.
The computer works. You can feed it problems; it sets up a scientific experiment based on the computational configuration, and then the quantum computer calculates a result. Analysis of the experimental results shows that it found answers, and that the answers were the kind of answers expected of a quantum computer. So the debate over whether the D-Wave is actually doing quantum calculations, or is some elaborate hoax with, say, a software simulation going on somewhere inside, is ended.
The criticism concerned how the time to do the calculation compared to hard-coded solutions on conventional computers. This is a first-generation quantum computer from D-Wave, with 5 years of development behind it, that was tested against linear computers with 60 years and some 60 generations of development behind them, and in some cases it was substantially faster than a hard-coded competitor. In many cases it was slower or the same. Since these experiments were carried out D-Wave has introduced another generation of computer with twice as many qubits. Since quantum computers grow in performance exponentially with qubit count, in 5 or 10 years a single quantum computer may have greater computational capability for certain problems than the entire computing capacity of all computers made today.
Will quantum computing replace von Neumann computing? No. Are the types of problems amenable to quantum processing worth solving faster? Yes.
So, based on JT’s description, “much more uncertainty = more information.” Interesting, very interesting. I guess the whole premise of my doctoral dissertation is just a sham; good thing Trudeau wasn’t on my committee (of course he would have been 17 at the time, but then Doogie Howser started his residency at 15).
Well he did tell Canadians that they needed to “rethink elements as basic as time and space” during his election run.
I am thinking about how much longer I want to live here!
Well arguably, white Gaussian noise, being infinitely unpredictable (in the sense that no future state can be predicted from an infinite string of previous states), is 100% information (about the state of the system).
G
So what did YOU dissertate on the subject ?
Uh, I think that would be 100% of the available information, which is zero.
These assertions of unpredictability are groundless. All you need is an agenda, and predictions are easy.
Nothing new. Stargate called it a “Z-point module”.
(OOPS! My bad. This was about “computing power”, not “power storage (batteries)”. Maybe chuckle but otherwise, please ignore this comment.8-)
More akin to the Hitchhiker’s improbability drive.
Ah the zero-point energy source, made of unobtanium and sold in the new Ba’al & Molock catalogue.
Since the beginning of the information age, software engineers have struggled to keep up with the advancement of hardware, and so advancements have been artificially stunted. A prime example is the video game industry, where advancements in hardware are literally withheld so as to give game developers a chance to release a game for their chosen platform, which can take years if not sometimes a decade, as with Final Fantasy 15.
The point I’m trying to make is even with advent of quantum computing, the sad reality is, software engineers will always be leagues behind hardware engineers.
Yes Dog, I agree. My brother is a video game programmer and he developed a pilot for a ufo game using curved polygons when that came out 15 or more years ago. It looked great and required much less computing power, but everyone was already too invested in flat triangular polygons to change.
Well when software mistakes are left in place, rather than fixed, and an endless layer of band aids is plastered on top, then it is no wonder software is so inefficient.
I can’t even run my M$ Windows computer full time, because it keeps using more than half its time downloading and installing new “ patches ” before I can use it to do anything.
g
That’s a “feature”!
Having misspent some of my youth doing software testing and configuration control back before most of the folks now alive were born, I have some sympathy for MS, Apple, et al. But frankly I don’t think it’s possible to patch modern software, with its vast attack surfaces and overwhelming complexity, into something safe and reliable.
I gotta tell you. At the rate of 15 or so bug fixes a week, it’s going to take a while to squash 10^god_alone_knows_what_power software bugs. And, of course, people make mistakes, so they’re almost for sure going to brick your computer a number of times.
Welcome to the Internet of Horrors. Enjoy your visit.
If your usage permits it, you might give some thought to one network connected computer for everyday stuff and a second, non-network connected computer for real work. No, I’m not kidding.
Oh, I don’t know if that is universally true. I’m getting a dual nVidia K-80 machine in tomorrow specifically because the dual K-10 machine is not powerful enough to do what I need to do without heavy sacrifices in performance (signal to noise ratio performance). The software I wrote 4 years ago to solve my problem is largely unchanged simply because the algorithms I use have no reason to be changed (efficiency issues aside).
The bottleneck I almost always face is data throughput: the processor is stalled because of the time it takes to load data into the pipeline. Quantum computing will not solve this any better than an improvement in software/compilation will. Certainly you can make all of your storage fast static RAM (what caches use), but it’s expensive and power hungry (heat kills, too).
There was much fanfare about memristors at some point, IIRC.
Mark — You’re right of course. But, to the very limited extent that I understand QC, processing massive amounts of data is pretty much exactly the opposite of what QC is supposed to be good at. What QC, if it can be done, should excel at is performing incredible numbers of simple computations in parallel and somehow — it’s never been clear to me exactly how — delivering up the best solution. For example it should presumably be able to try all 2**1024 possible 1024 bit keys to a sample of encrypted text and deliver up the key that produces something readable.
Since it sometimes seems that nothing involving quantum mechanics is too far fetched to be impossible, maybe QC will someday be able to do exactly that. But I think it might be a while before every hacker in the world can use their $17 quantum box to access anybody’s bank accounts, communications, and digital door locks.
The job of a software engineer is to get the most out of the current hardware. If hardware is held back it’s a financial decision on how to profit from the waves of obsolescence. Consider the price structure of Intel chips, a small change in performance is supposedly worth a huge step in price.
I haven’t used DIP packages on a PCB in ages. If you are going to illustrate gee whiz…..
Quantum computing … should become a reality just after the first cold fusion reactor goes on line.
I’ll believe it when I see it!
..OMG..If the Quantum Mechanics Computer finds it hard to pick the best hockey player in the NHL, then it is only good for scrap !
Three of my five kids are into this kind of thing but it is way beyond me.
Five of my three kids are better at math than I am.
There are three types of people in the world, those that are good at math and those that aren’t
Do not worry about a dyslexic mistake John, I understand that 10 out of 4 people are dyslexic.
There are 10 types of people in the world.
Those that understand binary, and those that don’t.
David, did you hear about the dyslexic, insomniac, agnostic?
He lay awake at night, wondering if there was a dog.
The problem with a computer that ‘solves’ problems that no other computer can is that you may not be able to verify that the answer is right. I realize that in this post-normal-science world, feeling it is right may be enough, but it is nice to have verification.
Some problems, such as cracking cryptographic cyphers (one of the proposed uses for quantum computing), are easy to check once you have the solution – it is finding the solution which is difficult.
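A toy Python illustration of that asymmetry (the numbers are tiny and arbitrary; real cryptographic moduli are hundreds of digits long):

```python
# Finding a factor of n is the slow part (naive trial division here);
# checking a proposed factorisation is a single multiplication.
def find_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                  # n itself is prime

n = 104729 * 1299709          # product of two primes
p = find_factor(n)            # slow: has to search
q = n // p
print(p, q, p * q == n)       # fast: verification is one multiply -> True
```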
That’s correct, and of course decryption is the Holy Grail when it comes to peddling your QC box for BIG bucks and retiring to cruise the South Seas on your palatial yacht. Furthermore, for decryption, it probably doesn’t matter all that much if the correct answer leaks out during computation and isn’t there when the computer condenses to its solution (I think leak and condense are the terms they use). You can always try again and see if the next run generates a usable answer.
Unfortunately, I believe investing in this has less to do with retiring to the South Seas than it does with the South Sea Company of old England.
This has nothing to do with feeling or post-normal science. In many cases it is much easier to check whether a solution is correct than it is to find the solution.
They’re fantasy. #1: translating information from reality to quantum: doesn’t work. #2: all at the same time but with actual output? We have that, it’s called congress/parliament – it’s why we have single-speaker rules. #3: how do you program a quantum computer when adding data to the program changes what the program means?
D-Wave is not a quantum computer; it is designed to do some things very fast. Praising D-Wave for fast processing of specific tasks is like praising a batting machine for batting a high average.
Reminds me of Deep Blue beating that chess player. Deep Blue was fed the knowledge of three chess masters. It didn’t “think”; it simply ran down the numbers, weighed against the input of chess knowledge in a form that software could use.
AI and quantum computing are far behind the hype
D-Wave uses adiabatic quantum systems, which do not give you the immense speed-up for certain types of calculations that a system of entangled q-bits would give.
Shor’s algorithm, in theory, would be able to crack the vast majority of today’s cryptographic systems, including RSA, but so far nobody has been able to create a machine containing more than the 10 q-bits necessary to factor the number 21. Every additional q-bit is much harder than the last, and nobody has managed to scale it up to the thousands needed to crack today’s cryptographic messages.
D-Wave isn’t attempting to do that. There are other things that D-Wave’s “quantum” computers may be better at than classical computers, but speed is not one of them.
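For the curious, the number-theory skeleton behind Shor’s algorithm for N = 21 can be run classically; in the real algorithm only the order-finding step below would be done by quantum period-finding (this sketch is purely classical):

```python
from math import gcd

N, a = 21, 2                  # a must be coprime to N

# Order finding: smallest r > 0 with a**r = 1 (mod N). This brute-force loop
# is the expensive step that Shor's algorithm replaces with quantum hardware.
r = 1
while pow(a, r, N) != 1:
    r += 1

# With r even and a**(r/2) != -1 (mod N), the gcds below reveal the factors.
x = pow(a, r // 2, N)
print("order r =", r)                  # 6
print(gcd(x - 1, N), gcd(x + 1, N))    # 7 3
```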
Concerning quantum computing, our Glorious Canadian Leader had this to say …. in response to a press question on ISIS.
https://youtu.be/4ZBLSjF56S8
He was at an event announcing $50 million to the Perimeter Institute of theoretical physics at the U. of Waterloo. His understanding reminds me of another great theory by Anne Elk https://youtu.be/fTOH8QK-6HA?t=3
QC, neural networks, fuzzy logic, AI: always touted with much fanfare at first, only to fade away (usually).
Encoding lots of info into a quantum state is only half of the problem; you also need to decode it somehow to get the info back out.
AI has come around the “breakthrough” loop about three times in my career.
One of the electrical engineering trade newspapers (EE Times) always did an April Fools’ Day front cover.
Spoof of the year circa ~1980 was IBM’s groundbreaking work in “Artificial Stupidity”. The spoof goes on to praise IBM: “Leave it to IBM to go where no other tech companies are going to break new ground”. And fantastic performance specs: IBM’s “ASS” (Artificial Stupidity System, of course) could calculate the wrong answer 3 million times faster than a human…..
Another famous spoof was the first WOM – Write Only Memory IC – with 1000 times higher density than any other memory chip… This one was a real hoot with a great faux data sheet; one of the charts was “Pins Remaining versus Number of Insertions”. And of course the time delay to read back data was… you guessed it – infinity…
Fuzzy Logic was all the rage back circa 1990; it could supposedly do everything better. One of the examples touted was how a fuzzy logic controller could make an electric heater warm up to max temp faster than an “old fashioned” thermostat. The only thing that determines how fast a heater can warm up is the power supply; ain’t nothing the controller can do about it.
So at this point I would rate QC as another fad, but you never know. Just don’t waste my tax dollars on it please.
And the thought of a politician explaining technology is a real hoot.
The current POTUS a few years back explained how bailing out a US car company by giving it to a European car company got us access to the “Technology of Small Cars”…
What technology pray tell ? The big secret is…. all the parts are smaller…… Like making 13 inch wheels is a HUGE technological leap forward from making 16 inch wheels….
Cheers, KevinK
(Old EE Times spoofs paraphrased from memory)
13 inch wheels is a huge technological leap from 16 inch wheels, because it reduces the unsprung mass significantly..
G
And they go around quicker so you get there faster.
After a little research I believe the spoof headline from about 1980 was more like;
“While everyone else pursues Artificial Intelligence leave it to IBM to make new breakthroughs in Artificial Stupidity”.
And a simple $10 fan with an Off/Low/Medium/Fast speed selector switch is “technically” an example of Fuzzy Logic. While an old fashioned fan with an On/Off switch is “technically” not Fuzzy Logic.
Mostly buzzwords…
Please, no. Fuzzy logic asks “is the state 0, 0.1, 0.2, 0.3 … 1.0?”, not “is the state ‘0’ or ‘1’?”
I.e. it’s not binary, it’s weighted logic.
Giving it a silly name opened it up to disrespect, but it’s a valid approach to some problems.
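A small Python illustration of that weighting (the membership curve and temperatures are invented for the example):

```python
# Fuzzy membership: instead of "hot" being simply true or false,
# a temperature has a degree of hotness between 0.0 and 1.0.
def hot_membership(temp_c, cold=15.0, hot=35.0):
    if temp_c <= cold:
        return 0.0
    if temp_c >= hot:
        return 1.0
    return (temp_c - cold) / (hot - cold)   # linear ramp in between

for t in (10, 20, 25, 30, 40):
    print(t, "C ->", round(hot_membership(t), 2))
# 10 C -> 0.0, 20 C -> 0.25, 25 C -> 0.5, 30 C -> 0.75, 40 C -> 1.0
```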
Write Only Memory… that hurt. 🙂
With quantum computing, will we be able to find out, without opening up the computer, whether the cat is alive or dead?
Yes. A problem is programming that cat for the problem before putting it in the quantum box.
If you were allowed to open up the D-Wave box you would find some old fart programmer writing in Assembler.
Hey, what’s with the “old fart” business. I represent that.
> If you were allowed to open up the D-Wave box you would find some old fart programmer writing in Assembler.
Or possibly a really pissed off cat.
Thank you!
I resemble that remark!
Real old farts write machine code.
“Real old farts write machine code.”
I knew a bloke once who used to input the boot loader of his Altair in binary via the DIP switches from memory…
One of Rockwell’s old ICBM programmers told me that in the missile crisis of ’63 the nearest the US could place a nuke was about 150 miles, give or take, from Moscow. The claims for precise targeting were hubris. I asked why and was told the basic reason was the ICBM had only 4k of memory. The entire targeting system had to fit into 4k of RAM, including the star recognition, geolocation, steering and aiming systems. The earth is not round, gravity varies, there is wind…
How did they do it at all? Everything was written in machine language! Not a single bit was wasted. Efficiency was all. A dedicated processor with dedicated no-fat software can run far faster than something generic.
The quantum computer the Perimeter showed at their tenth anniversary public bash in Waterloo had 8 QBits. I asked how many times they had to read the state of the atom to get a definite answer. 30,000,000 times.
It could multiply 2*3 and add 2. It did it slowly, relatively speaking. So if in fifty years they completely solve the ‘state’ problem it will be 30,000,000 times faster, at least. Hopefully it won’t get the wrong answer(s).
Loved the WOM story. (Write Only Memory) . Know some people like that.
It’s like the CVOS. A buddy of mine worked for a company that did process monitoring sensors and such, and for a particular customer he had done a lot of work to prove that once their process was tweaked in it was remarkably stable. Management, undeterred, decided they needed an additional sensor so they could close the process loop and get even better performance.
Convinced it was a waste of time, but being told he must produce said sensor, he came up with the CVOS – the constant value output sensor. Just needs a 9V battery and it’s good to go, no adjustment required. All the engineers in the room about died trying not to laugh while the management was all ‘this is great!’. I still chuckle every time I think about it.
I’ve seen some that was written in write-only memory, because nobody could read it afterwards. Mostly Perl or APL, with the occasional TECO.
Quantum computers could greatly accelerate machine learning
http://phys.org/news/2015-03-quantum-greatly-machine.html
“may be able” that was enough to see.
I also saw the words “could”…”may be able to”…”in some cases” and “often”.
You have to subscribe to a strange interpretation of QM.
http://blog.darkbuzz.com/2015/04/concise-argument-against-quantum.html
http://blog.darkbuzz.com/2014/11/the-universe-is-not-quantum-computer.html
I started programming in 1967, on a PDP-8/i. QC, if memory serves, appeared in the ’70s and would make everything else obsolete “Real Soon”, so fast that it would have the answer before you asked the question. AI was about the same; it would even ask the question. Just around the corner. Now, old and gray, still waiting.
And Steve, don’t knock Assembler. Takes longer to write, but if well done, faster and smaller than anything else. Low power, small embedded systems.
And a thing of beauty to behold when it is well done
Yea! One old fart to another.
I started in about 1974 using “APL” (A Programming Language) on an old “DEC Writer” (combination typewriter / sprocket-hole paper printer) connected via a 300 (or maybe a 150?) baud acoustic modem. This talked to a computer 20 miles away.
First program I wrote; Blackjack (the card game). Whoo Hooo, heady times, I was one of about 3 students in my high school in the “computer science” group.
Yeah, AI, Flying cars, Bases on Mars, QC, lots of stuff “just around the corner”, still waiting to see just exactly where that “corner” is.
I did PDP-8, 8008, VAX assembler, some of the First DSP assembler code (TI and Analog Devices). Fun stuff, you could “feel” the bits moving around in the computer, some of the tools actually showed live updates of the bits changing in all the registers of the computer, sometimes you had 5 or maybe even 6 registers to watch.
In the old days we only had ones and zeros, and sometimes we ran out of zeros and had to use “O’s”, ha ha ha…
For the younger folks;
APL was a “symbolic” programming language with no memory or compiler; you had to type in all your instructions one line at a time and try executing the program. If you typed in an incorrect instruction you had to “dump the memory” and start over by typing in all the instructions again from scratch. Very crude.
A DEC Writer was a keyboard and a one-line-at-a-time printer; you typed in a line of text and hit Carriage Return (the old fashioned version of “enter”), then you waited a few seconds to see if the computer liked what you wrote. If it was happy it would print out a single line of text on the paper in front of you and advance the paper so you could see what you typed, then wait for your next instruction. Very crude.
An acoustic modem was a really weird device; back then there was a wire (yes, a wire) from the wall of the room you were in to a phone. One of those really odd old phones that was huge and had two big round ends connected by a plastic piece. Back then you would pick up the phone, dial a number, wait for a strange squealing noise and then stuff the round ends of your phone into two big rubber suction cups. I know, unbelievable isn’t it. And if it all worked you could type 20 or maybe even 40 characters a second on your keyboard then wait a minute or two for a response.
Then we moved all the way forward to color displays right in your hand with the computer inside always ready to connect to anyone anyplace…. And stored programs with a 100 different versions of Blackjack with a dozen color schemes to select from.
And now after all this progress we can finally (using Twitter ™) type in a single line of text, then hit “enter” and wait a while to see if anybody answers……
Ah, progress…
Cheers, KevinK
Oh, I forgot, after all this progress in science nobody but nobody can tell me the average temperature one week from now at my location to much better than +/- 5 degrees F.
KevinK
“Yeah, AI, Flying cars, Bases on Mars, QC, lots of stuff “just around the corner”, still waiting to see just exactly where that “corner” is.”
You forgot to mention ‘renewable energy that will compete on cost with nuclear and coal-fired power plants’.
APL had memory. There were workspaces with commands like )SAVE ws. When you defined a function every character was saved and could be edited with the built in editor. Even on an IBM 1130. The interpreter could outperform better known compiled languages, and several compilers existed. It’s still in use.
You old farts would be jealous of the system I get to write software for. 😉
Full disclosure: I actually SAW a card reader when I was 13, visiting what would be my high school later that year.
I moved on to punched cards as an input device for a batch processing system at college.
At least then you did not have to type in all the lines all over again to fix a simple typo. But woe on you if you dropped your “deck” and had to sort them all out.
I believe it was a “Boroughs Welcome” mainframe computer ? Or maybe a Honeywell ? Always with a number after the name, I think we had a 7700 ?
For a while there was quite a number of mainframe computer manufacturers.
But woe on you if you dropped your “deck” and had to sort them all out.
I dropped a deck once. Worse than having to sort it out was that the machine that punched the cards was old and no longer printed the statement at the top of the card. So I came up with the brilliant idea of running the deck thru the reader and then matching the cards to the printout so I could sort them from there.
Completely by chance, that was my first infinite loop. With a page feed to the printer in the middle of it….
> At least then you did not have to type in all the lines all over again to fix a simple typo. But woe on you if you dropped your “deck” and had to sort them all out.
Most software that used punch card input allowed you to put a sequence number in cols 73-80 of the card. Reordering a dropped or otherwise mangled deck was just a matter of 8 passes through a card sorter.
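What the card sorter was doing is essentially a least-significant-digit radix sort; a rough Python rendering (the 8-digit field stands in for columns 73-80, and the card contents are invented):

```python
# A dropped deck: each "card" carries an 8-digit sequence number from
# columns 73-80. One pass per column, rightmost column first, mimics the
# 8 passes through a mechanical card sorter (an LSD radix sort).
deck = ["00000030", "00000010", "00000050", "00000020", "00000040"]

for col in range(7, -1, -1):              # rightmost column first
    pockets = [[] for _ in range(10)]     # the sorter's ten pockets
    for card in deck:
        pockets[int(card[col])].append(card)
    deck = [card for pocket in pockets for card in pocket]

print(deck)   # ['00000010', '00000020', '00000030', '00000040', '00000050']
```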
BTW, if you’re looking for a REALLY obnoxious input medium, don’t overlook paper tape. Not only was it fragile, it came in several different numbers of holes per column and in a variety of colors — some of which became transparent enough to confuse optical readers when wet or oily.
Dreadful stuff.
My second start at programming was much better than my first.
But it required the old computer card dance; write the code, enter the code onto cards, run the deck and submit the program, come back tomorrow for the output.
Only all of the card machines were beginning to fall apart at Penn State’s Ogontz campus.
The machine for punching cards didn’t print on them.
So after creating your card deck we used a second machine that didn’t punch anymore to enter the print lines.
Then we got to read the cards to check our typing.
Yeah, spilling a deck was a downer and sometimes a good thing as sorting cards took your mind off other problems.
Another problem was getting caught in a rainstorm and getting cards wet. Damp cards with thumbed edges shouldn’t be fed into card readers; the technicians don’t like you much then, especially if there is a waiting line of impatient students.
Moving on to a community college with a brand-new, fully outfitted IBM mainframe (System/370 4341), including a multitude of color terminals and even a room of IBM 286 PC-terminals with user-configurable color.
No waiting for output, results were returned to our screen and we printed programs and output on demand rather than every time.
CMS operating system rather than the typical TSO, but very similar. An on demand TSO editor that seemed a miracle in those days.
Then again, I view many of today’s code editors as being minor miracles for inputting and reading code. What can I say, I’m easily spoiled.
“Don K April 20, 2016 at 12:13 am
BTW, if you’re looking for a REALLY obnoxious input medium, don’t overlook paper tape.”
It had its place, such as in milling machines for repetitive machining work, and worked quite well in my experience. This was before computer numerical control (CNC) systems.
First computer I touched: IBM 360, Waterloo U – high school math class day visit.
First program developed from scratch: Fortran 4, calculated n! It took me most of the day. Punch cards of course and a wi-ide carriage line printer.
I asked for assistance from a former student of my HS, enrolled in pure math. After a while scratching her head, she abandoned me saying, “We don’t speak the same language!”
System 360/370 were SNA architecture platforms. A 4341 I don’t think was Sys/370; Sys/360 probably. My first mainframe was a 4381 with 48MB of RAM. The mobile in my “skyrocket” has 1000 times more RAM.
Who you calling “old fart”? I resemble that remark!
Assembler was and is the way to fly. Higher level languages are just ways for people who can’t handle assembler to write code and we have been increasing computer speed and memory ever since they were invented in order to compensate for the bad/inefficient code people write. Assembler separated the men from the boys because you couldn’t “fake” it in assembler.
And yes, code IS self-documenting.
Very true comment. IBM MVS/ESA assembler was very nice to work with in my experience.
Yeah, the code became self documenting when assembly phrases read like English to the mind.
IBM BAL Assembly here.
As satisfying as it is to write a solid Assembly program, it is still sweet to write a chunk of good code in many higher-level languages.
Those forays Willis has taken us on using the R language are very interesting.
I spent too many years coding in FOCUS and C with TSO snippets for db allocations to ignore them.
Though many times, I cursed what the code must’ve compiled to and went over my program seeking algorithms that were more explicit.
As my COBOL teacher told us day in and day out: Always code explicitly! Implicit code means that you are trusting some other bored programmer to make decisions for you.
A big part of the problem is that programmers have to write code that will run on many different machines these days. Add to that new processors coming out daily, and it’s no longer worth the effort to become good at assembly language.
High quality optimizers these days can compete with all but the most expert of assembler writers, and do so in a small fraction of the time.
My first two jobs were in 6805 and 80186 assembler.
Until you use the carry bit to increment the operand …
And Steve, don’t knock Assembler. Takes longer to write, but if well done, faster and smaller than anything else. Low power, small embedded systems.
Not necessarily true.
1/. FORTH was always more compact than assembler, and was ideal for very small RAM/ROM environments. Of course you can write FORTH programs in assembler I suppose.
2/. It depends on the code and how it’s written. IIRC Gary Kildall as a student was given some huge chunk of assembler to fix. He rewrote it in FORTRAN and fixed its bugs. It was smaller and ran ten times faster.
3/. It depends on the compiler. Years ago I made a living writing embedded code in C and assembler. Assembler was faster. More recently, writing for x86 in C, I looked at the generated code as assembler. I couldn’t have written tighter code than that. C optimisation and efficient use of registers make it very hard to compete using assembler.
Forth more compact and faster? Only with a poor Assembly programmer. After all, compilers produce Assembly (machine code), so at best they could equal a good assembly programmer. Certainly it is faster to program in high-level languages, so in most cases far more practical. And there are darn few of us old geezer machine code types. Probably helped that memory was $1/byte back then; 4K cost $4000 (and much bigger dollars than now). So we learned to write tight code. I do not have a photographic memory, so I documented the daylights out of my code in case I had to come back and modify it later. My boss, a COBOL programmer, was impressed. He could read what I wrote (IBM 360/30 BAL, Rah!). I was writing “Structured” programs long before Yourdon invented them. Also self defense.
To be a really great “assembler programmer” one has to know exactly how the “processor chip” functions ….. so that they can “design” their program to take advantage of all of said “chip’s” attributes.
An assembler programmer ….. (meaning the “assembler” converts each coded “instruction” directly to machine language (binary code)) …… tells the “processor chip” exactly what to do and when to do it. And it don’t get any better, quicker or more efficient than that.
One has to be an “original thinker” to be a really good “assembler” programmer.
Cheers, ….. from an ole computer designing/programming dinosaur. HA, I’se remember back when a 4 micro-second read-write memory chip was “faster than quick”.