My prediction for the next 10 years of linear optics quantum computing

People often ask me how long it will be before we have an optical quantum computer with post-classical capabilities. My answer, until recently, was that it would be a very, very long time. However, a recent landmark theoretical study by Aaronson & Arkhipov (here) suggests that the timeline could be much shorter than previously expected. They presented a model for optical quantum computing called ‘Boson-sampling’, which significantly relaxes the experimental requirements, yet nonetheless likely implements an algorithm which cannot be efficiently classically simulated. From a quantum computing perspective this is exactly what we want – a computer that no classical computer in the world can efficiently simulate.

The model is very simple. We prepare a bunch of single photons, pass them through a passive linear optics network (i.e. beamsplitters and phase shifters – no feedforward, no quantum memory, no post-selection), and then measure the photon statistics at the output. Importantly, while the model implements a classically hard sampling problem, it is strongly believed not to be universal for quantum computing.
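To make the sampling task concrete, here is a minimal Python sketch of where the output statistics come from (my illustration, not from the Aaronson & Arkhipov paper): for single photons injected into distinct input modes of a network described by a unitary U, the probability of a collision-free output configuration is the absolute square of the permanent of the corresponding submatrix of U. The toy six-mode network and function names here are hypothetical.

```python
import itertools
import numpy as np

def permanent(M):
    """Brute-force permanent via the definition; fine for a few photons."""
    n = M.shape[0]
    return sum(
        np.prod([M[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

def output_probability(U, inputs, outputs):
    """Probability of a collision-free output configuration.

    `inputs`/`outputs` list the occupied modes (one photon each),
    e.g. inputs=[0, 1, 2] for three photons in the first three modes.
    For such configurations P = |Per(U_sub)|^2.
    """
    U_sub = U[np.ix_(outputs, inputs)]
    return abs(permanent(U_sub)) ** 2

# Toy example: 3 photons in a random 6-mode linear-optical network.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(X)  # a (roughly Haar-random) unitary
print(output_probability(U, inputs=[0, 1, 2], outputs=[1, 3, 5]))
```

The brute-force permanent above scales as O(n·n!), which is exactly the point: even this tiny calculation becomes intractable as the photon number grows.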

Recently, numerous experimental groups have begun implementing elementary demonstrations of Boson-sampling. In the last few weeks there have been no fewer than four experimental demonstrations of three-photon Boson-sampling (here, here, here and here). The demonstration by Broome et al. (here) was recently published in Science.

Since full-blown optical quantum computing is a very long-term goal, here’s my prediction for the next ten years.

In 2014 we’ll see another Science paper demonstrating four-photon Boson-sampling. In 2015 there will be another Science paper with five photons. The following year we’ll see – here’s a scoop – a six-photon Boson-sampling paper, published in Science of course. And this will continue, on and on, until the experimentalists run out of single-photon sources and photodetectors. Then, in ten years’ time, we’ll have a device with post-classical capabilities and everyone will rejoice.

Here’s the catch. First, unlike many other computational problems, the output of Boson-sampling cannot even be efficiently verified (unlike, say, Shor’s algorithm, whose output can be efficiently checked since factoring resides in NP). That is, if I tell you the output statistics that I measured, there is no known way of determining whether they are correct. So how will we ever know that our computer is working correctly? One approach (the one used by Broome et al.) is, instead of verifying the output of the computer, to characterise the unitary network defining the transformation. This can be done efficiently (see work by Rahimi-Keshari et al. here), but it relies on the assumption that the network is purely unitary, which, in general, is not the case (e.g. as a result of effects like mode-mismatch and higher-order photon-number terms). Second, there are no known algorithmic applications for Boson-sampling. Aaronson & Arkhipov present a strong computational complexity argument that Boson-sampling is classically hard to simulate, but, at this stage, no one has actually mapped it to solving a problem of interest. This is in contrast to universal quantum computation, for which a plethora of algorithms, most famously Shor’s integer factorisation algorithm, are known.
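To see the verification asymmetry concretely: a claimed output of Shor’s algorithm is checked with a single multiplication, whereas checking Boson-sampling statistics against the ideal distribution would require computing matrix permanents, which is itself classically hard. A trivial sketch of the factoring check (my illustration, not from the post):

```python
def verify_factorisation(N, factors):
    """A claimed factorisation is verified by one multiplication.

    No comparably cheap check is known for Boson-sampling output
    statistics, since the ideal probabilities are given by matrix
    permanents, which are #P-hard to compute in general.
    """
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == N

print(verify_factorisation(21, [3, 7]))  # True
print(verify_factorisation(21, [3, 8]))  # False
```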

Nonetheless, I fully expect that this is the direction optical quantum computing will (and should) follow in the near future – one Science paper after another, each adding a few more photons and a bunch more modes.

It’s exciting, but, based on present understanding, completely and utterly useless. Nonetheless, I look forward to seeing how far this concept can be pushed, and I hope that in the meantime the theorists will come up with a killer application for these devices (I personally don’t know how to do this – I’m not sure that anyone does).

On a positive note, my recent work suggests that Boson-sampling may still implement a classically hard algorithm even in the presence of very high loss rates (here) and very low photon fidelities and spectral purities (here). This is in contrast to universal optical quantum computing schemes, which place very stringent requirements on these effects (the best known fault-tolerance schemes still require error rates below roughly 1%). This means that Boson-sampling may be much more viable with present-day technology than other approaches to optical quantum computing.

I look forward to future demonstrations with a couple of dozen photons, at which point a reasonable claim can be made that we have a device with post-classical capabilities (albeit still not very useful).

3 thoughts on “My prediction for the next 10 years of linear optics quantum computing”

  1. Since Boson-sampling computers are in their infancy, it remains uncertain whether they can solve problems beyond Boson-sampling itself. Still, this research suggests that computers based on quantum physics could indeed tackle problems beyond the reach of classical computers.

  2. The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a, b, c, d, e, f, g, h), called a ket. However, instead of the coefficients adding to one, the sum of the squares of their magnitudes, |a|^2 + |b|^2 + ... + |h|^2, must equal one. Moreover, the coefficients can take complex values. Since these complex-valued coefficients are probability amplitudes, whose absolute squares give the probabilities of the corresponding states, the phase between any two coefficients (states) represents a meaningful parameter, which presents a fundamental difference between quantum computing and probabilistic classical computing (see the numerical sketch after these comments).

  3. The first is that boson sampling isn’t a universal computational machine – perhaps a specialized machine is allowed to be faster than a classical computer. The “no-go” theorem could be of the form that no quantum Turing machine can deliver results beyond a classical Turing machine. There is also the small matter of proving that a classical machine cannot do the boson sampling calculation as fast as the “real thing”.
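As a numerical aside on comment 2 above (my illustration, not part of the comment), the normalisation and relative-phase structure of a three-qubit ket are easy to see:

```python
import numpy as np

# A three-qubit state: 8 complex amplitudes (a, b, ..., h).
rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)  # enforce |a|^2 + ... + |h|^2 = 1

print(np.sum(np.abs(psi) ** 2))  # 1.0 (normalisation)
print(np.abs(psi[0]) ** 2)       # probability of outcome |000>
# The relative phase between two amplitudes is physically meaningful,
# unlike in probabilistic classical computing:
print(np.angle(psi[1]) - np.angle(psi[0]))
```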
