I tried testing the Bernstein–Vazirani algorithm on a 5-qubit quantum computer that IBM let us use for an afternoon for one of my university classes last semester. It was unable to recover the hidden string, even though simulating the circuit classically recovered it. Anyone can launch a "quantum computer as a service" platform, but the service won't be valuable if the computer doesn't work. And so far no quantum computer has solved a problem faster than a classical computer can.
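For reference, the whole experiment is small enough to check on a laptop. Here's a minimal sketch of a classical Bernstein–Vazirani simulation in NumPy (the hidden string and the qubit-ordering convention are my own illustrative choices, not IBM's setup):

    import numpy as np

    def bernstein_vazirani(s: str) -> str:
        """Classically simulate the Bernstein-Vazirani circuit for hidden string s."""
        n = len(s)
        dim = 2 ** n
        # H on every qubit takes |0...0> to the uniform superposition.
        state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
        # Phase oracle: |x> -> (-1)^(s.x) |x>, with s.x the bitwise inner product.
        s_int = int(s, 2)
        for x in range(dim):
            if bin(x & s_int).count("1") % 2:
                state[x] *= -1
        # Second Hadamard layer, applied one qubit at a time (qubit q has stride 2^q).
        h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for q in range(n):
            state = np.einsum("ab,ibj->iaj",
                              h, state.reshape(-1, 2, 2 ** q)).reshape(dim)
        # All amplitude now sits on |s>, so measurement recovers s with certainty.
        return format(int(np.argmax(np.abs(state))), f"0{n}b")

    print(bernstein_vazirani("10110"))  # -> 10110

The simulation puts all the amplitude on |s> deterministically, which is exactly what the hardware failed to reproduce.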
The key question is whether QC algorithms are scaling as predicted by QC theory, or at least better than classical algorithms for the same task. If the error correction is causing them to scale poorly, the design is probably infeasible for calculations with large numbers of qubits.
I expect it is scaling poorly, or IBM would have reported otherwise when moving from 5- to 16-qubit machines.
No, the transistor replaced the vacuum tube, which everyone was already using. When the transistor was invented, everyone rushed to use it because it was clear what it was good for.
My point is that QCs have the potential to be as revolutionary as transistors were. Yet we are at the stage where transistors were mostly used to make radios portable. Thus it's not really justified to mock them as lame factoring devices.
Perhaps a Leyden jar, or an aeolipile. Novelties with great potential. Or perhaps they're just a drinking-bird kind of novelty; I don't think so, but maybe. QM is rock solid, but it doesn't tie together well with our other theories about the world. Maybe knowing more will make the error correction easier somehow.
I kinda think we're in the difference engine phase of quantum computing. Same problems Babbage had building a neat thing: we're missing some details, and our tools aren't really sharp enough to make a good one.
I read somewhere (can't recall the source) that it was a nice gadget, but it wasn't an overnight sensation. You are right, the MASER/LASER is a better analogy.
If you are talking about transistors, when they were invented the only alternatives were triodes (vacuum tubes) or mechanical relays. Both were orders of magnitude less reliable and slower in terms of switching speed. When transistors were invented, there was a mad dash to get them cheap and small enough to replace everything else, because everyone could tell they were the future. Only 9 years passed between the first transistor and the Nobel Prize for it.
I don't know where you read that but that is not the story that I'm familiar with.
Applications for the transistor were known before the device even existed.
The problems were first to make them reliably, then to make them in quantity and finally to make them cheap enough. All those were overcome and the revolution that followed has not stopped even today.
Yes, and the reason the government rushed to it is that they wanted the absolute best technology (orders of magnitude better than anything else) and would pay for it to have a leg up over their enemies. If they had been cheap right when they came out, everyone would have bought them.
The point where it gets interesting for realistic physics and chemistry applications is around 100 (error-corrected) logical qubits and 10^8 coherent operations, see for example https://arxiv.org/abs/1510.03859.
The error correction adds another factor of at least 100 or so in both qubits and gates needed (but possibly much bigger than 100, depending on qubit quality), see for example https://arxiv.org/abs/1312.2316.
Other fields of application - factoring large integers, for example - take many, many more qubits to be interesting.
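For a rough sense of scale, here's the arithmetic under the numbers cited above (taking the optimistic 100x error-correction factor; real overheads could be much larger):

    # Back-of-the-envelope resource estimate, assuming the figures cited above:
    logical_qubits = 100    # target for interesting chemistry applications
    coherent_ops = 10 ** 8  # coherent operations needed
    ec_overhead = 100       # optimistic error-correction factor (could be far bigger)

    physical_qubits = logical_qubits * ec_overhead
    physical_gates = coherent_ops * ec_overhead
    print(f"~{physical_qubits:,} physical qubits, ~{physical_gates:.0e} physical gates")
    # -> ~10,000 physical qubits, ~1e+10 physical gates

So even under optimistic assumptions, you're looking at ten thousand physical qubits before chemistry gets interesting, versus the 16 or 17 on offer today.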
While it's good to get people excited about the potential of quantum computing, it seems a bit disingenuous to suggest that a 17-qubit quantum processor is commercially interesting. I especially like how they juxtapose it with the publicly available 16-qubit quantum processor to make it seem like one extra qubit makes it worth paying money...
1. Do they claim that the backend is actually a quantum computer? Or are they just providing a quantum-computing-like interface, backed by a classical computer emulating a quantum computer?
2. Looking at their Terms of Service (excerpt below), it's unclear whether you share rights to any model that you try in their playground:
<i>"IBM does not want to receive confidential or proprietary information from you through our Web site. Please note that any information or material sent to IBM will be deemed NOT to be confidential. By sending IBM any information or material, you grant IBM an unrestricted, irrevocable license to copy, reproduce, publish, upload, post, transmit, distribute, publicly display, perform, modify, create derivative works from, and otherwise freely use, those materials or information. You also agree that IBM is free to use any ideas, concepts, know-how, or techniques that you send us for any purpose. However, we will not release your name or otherwise publicize the fact that you submitted materials or other information to us unless: [...]"</i>
I know next to nothing about quantum computing, but something about this seems extremely fishy. Had they developed a working quantum processor, would they not have publicized its capabilities far and wide before releasing an enterprise API? Failing that, would there not be a benchmarks page that demonstrates the capabilities of this system? Could someone better acquainted with this technology weigh in?
Even if their quantum computers work, at the scale they operate today everything they do will also be doable by simulating a quantum computer on an average PC with some slowdown. It's just "now we can do it on a real quantum computer".
They would have to rapidly increase the qubit count to produce anything useful. That may very well happen in the future, but nobody knows for sure.
Indeed, someone spends what AFAICT is a very small amount of his time debunking every last claim that any given quantum computer does anything straightforward simulated annealing can't do.
Not any given quantum computer -- Scott Aaronson debunks the D-Wave "quantum" annealers, which haven't actually been shown to harness any quantum speedup.
IBM's machine by contrast is a genuine quantum computer -- the kind Scott Aaronson would probably have no problem with. But the number of qubits is too tiny to do anything interesting.
> While technologies like AI can find patterns buried in vast amounts of existing data, quantum computers will deliver solutions to important problems where patterns cannot be found and the number of possibilities that you need to explore to get to the answer are too enormous ever to be processed by classical computers.
There are some pretty cool problems in quantum systems, like chemistry, that even a classical supercomputer would not be able to solve. This could have a very positive impact on fields like medicine.
I am not saying what quantum computing is or isn't doing. Contrasting quantum computing and AI is like comparing... digging a hole and shovels. "While digging a hole is useful, these shovels are even better than hole-digging!" It's nonsense sauce. If quantum computing can be useful at AI tasks, then quantum computing won't replace AI, we'll be doing AI on quantum computers. This is just marketing mumbo jumbo to convince some C-suite type that doesn't know a thing about data science to say, "Why are we still using AI and not this quantum thing IBM has? Quantum is better than AI!"
Yes, yes, real AI died in the AI Winter because the settlers didn't have enough parenthesis to last until spring, it was all very tragic, and now we're all just stirring the pile of linear algebra until the results look good. [1] But that doesn't make IBM's marketing copy any better. And IBM loves making grandiose claims that don't work out in practice. [2]
A five-qubit processor has been available for a bit over a year now [1]. Last week, they announced that they'll make a 16-qubit processor freely available, but for now it is invite-only. They've also announced that they'll sell access to a 17-qubit device.
The Google group in Santa Barbara is building toward a 49-qubit computer. If I understand correctly, it'll be interesting for understanding how you calibrate a quantum machine and validate that it is doing what it is supposed to. The algorithms they run on classical computers to validate that device will need a decent amount of supercomputer time.
And much bigger devices would be impossible to validate with a classical computer in the same way, so in some sense you hit "the limit" of classical computation and start to move beyond.
But for the computational problems that are useful for applications, which are not very much like the problem they use for validation, 49 qubits is still far, far away from beating a classical computer.
"IBM is giving users worldwide the chance to use a quantum computer; Google is promising "quantum supremacy" by the end of the year; Microsoft's Station Q is working on the hardware and operating system for a machine that will outpace any conventional computer. Roland Pease meets some of the experts, and explores the technology behind the next information revolution."
Yep... that was the one... further on in the broadcast, the researcher from Google explains that for "quantum supremacy", "49 qubits" are enough to "leave the fastest classical computer in the dust", or some such hype.
16 qubits? How hard is it to simulate those 16 qubits with a regular computer? I get this is marketing but people would be better off just running a simulator at this point.
With 16 qubits, the Hilbert space has 2^16 = 65536 basis states. For numerical simulation, you need to store the wavefunction amplitude for each of these states, usually as a double-precision complex number (16 bytes). Altogether the wavefunction can be stored as a vector in 1 MiB of data. AFAIK all quantum computing algorithms can be expressed as a sequence of 1- and 2-qubit operations, which would be implemented as sparse matrix-vector products. So the simulation part is almost trivial. ;) I'd say that for 16 qubits, designing the quantum circuit to perform an actually useful operation is the far more difficult part.
However, once you start adding more qubits, the simulation goes from hard to infeasible. I think I saw a paper that simulated around 40 qubits; there the wavefunction already needs 16 TiB, so you need a big HPC cluster to run it.
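To make the "almost trivial" part concrete, here's a minimal sketch of one gate applied as a sparse matrix-vector product (the gate choice and qubit placement are arbitrary, just to show the mechanics):

    import numpy as np
    import scipy.sparse as sp

    n = 16                          # qubits
    dim = 2 ** n                    # 65536 basis states
    state = np.zeros(dim, dtype=np.complex128)
    state[0] = 1.0                  # |00...0>; the full vector is ~1 MiB

    # Build a 2-qubit gate (a CNOT on the two lowest qubits) as a sparse
    # operator on the whole Hilbert space: kron with identity on the other 14.
    cnot = sp.csr_matrix(np.array([[1, 0, 0, 0],
                                   [0, 1, 0, 0],
                                   [0, 0, 0, 1],
                                   [0, 0, 1, 0]], dtype=np.complex128))
    full_op = sp.kron(sp.identity(2 ** (n - 2), format="csr"), cnot, format="csr")

    state = full_op @ state         # one gate = one sparse mat-vec product

A whole circuit is just a loop of such products, which is why the 16-qubit case is no challenge for a laptop.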
It's pretty easy. If your computer has enough RAM to store a length-2^16 complex vector (which is 2^20 bytes, or 1 MiB), then you can open up an IPython notebook and write code to apply quantum gates to it with no problem.
The problems start to set in when your RAM can't hold the wavefunction in memory (so around 28 qubits, which takes 2^32 bytes = 4 GB of RAM).
With specialized code and supercomputers you can get a little farther, but you will be fighting exponential growth, so not too much. The practical limit for classical computers is in the 40-50 qubit range.
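The exponential wall is easy to see from the storage requirement alone (assuming 16 bytes per complex amplitude, as above):

    # Memory needed to hold an n-qubit state vector at 16 bytes per amplitude.
    for n in (16, 28, 40, 49):
        nbytes = (2 ** n) * 16
        print(f"{n:2d} qubits: {nbytes / 2**30:>12,.3f} GiB")
    # 16 qubits:        0.001 GiB  (~1 MiB)
    # 28 qubits:        4.000 GiB
    # 40 qubits:   16,384.000 GiB  (16 TiB)
    # 49 qubits: 8,388,608.000 GiB (8 PiB)

Every extra qubit doubles the memory, so no amount of clever classical engineering buys you more than a handful of additional qubits.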
They seem to be running on inertia at this point. I haven't seen any plausible new enterprises from them. It's possible I'm looking in the wrong places, though.
Fully agreed, but they've been running on inertia for the last two decades, and that has nothing to do with profitability. They're still printing money every quarter and paying out substantial dividends.
Obviously that can't go on forever and they're desperately searching for a path to a viable future but for the moment they are definitely in the black and will - as far as I can see - stay there for quite a while to come.