Category Archives: Physics

New paper: Scalable boson-sampling with time-bin encoding using a loop-based architecture

Full text here.

We present an architecture for arbitrarily scalable boson-sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson-sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.
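
To make the time-bin picture concrete, here is a minimal single-photon sketch (my own toy model, not the paper's full scheme): a loop whose delay matches the pulse spacing interferes each incoming bin with the amplitude left behind by the previous one, with a dynamically reprogrammed coupling angle at each bin. Function and variable names are illustrative only.

```python
import numpy as np

def loop_pass(amps, thetas):
    """One pass of a single-photon pulse train through a fibre loop whose
    coupling ratio is reset for every incoming time bin. The loop delay
    matches the bin spacing, so the amplitude stored by bin k interferes
    with bin k + 1 at the same coupler. Toy model, single-photon sector."""
    loop = 0j
    out = np.zeros(len(amps) + 1, dtype=complex)  # extra bin flushes the loop
    for k, a in enumerate(amps):
        c, s = np.cos(thetas[k]), np.sin(thetas[k])
        out[k] = c * a + 1j * s * loop            # exits towards the output
        loop = 1j * s * a + c * loop              # delayed one round trip
    out[-1] = loop                                # residual amplitude flushed
    return out

amps = np.array([1, 0, 0, 0], dtype=complex)      # photon in the first bin
out = loop_pass(amps, thetas=[np.pi / 4] * 4)     # 50:50 couplings throughout
print(np.abs(out) ** 2, (np.abs(out) ** 2).sum()) # probabilities sum to 1
```

The same mode transformation acts on a multi-photon pulse train; it is the multi-photon interference, not captured by this amplitude picture, that makes the sampling problem hard.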

Comment: Will boson-sampling ever disprove the Extended Church-Turing thesis?

Full text here.

Boson-sampling is a highly simplified, but non-universal, approach to implementing optical quantum computation. It was shown by Aaronson & Arkhipov that this protocol cannot be efficiently classically simulated unless the polynomial hierarchy collapses, which would be a shocking result in computational complexity theory. Based on this, numerous authors have made the claim that experimental boson-sampling would provide evidence against, or disprove, the Extended Church-Turing thesis -- that any physically realisable system can be efficiently simulated on a Turing machine. We argue against this claim on the basis that, under a general, physically realistic independent error model, boson-sampling does not implement a provably hard computational problem in the asymptotic limit of large systems.
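
The crux of the argument is easy to see numerically: under an independent error model with any fixed per-photon error rate, the probability that an n-photon run is entirely error-free vanishes exponentially with n. A back-of-the-envelope sketch (the 1% rate is an assumed figure, not taken from the paper):

```python
# Under an independent error model, each of n photons survives a run
# error-free with probability (1 - eps); the chance that a sample is drawn
# from the ideal distribution then decays exponentially with n.
eps = 0.01                       # assumed per-photon error rate (illustrative)
for n in (10, 100, 500):
    print(n, (1 - eps) ** n)     # 0.90, 0.37, 0.0066: almost every sample
                                 # is corrupted in the asymptotic limit
```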

New paper: Boson sampling with photon-added coherent states

Full paper here.

Boson sampling is a simple and experimentally viable model for non-universal linear optics quantum computing. Boson sampling has been shown to implement a classically hard algorithm when fed with single photons. This raises the question as to whether there are other quantum states of light that implement similarly computationally complex problems. We consider a class of continuous-variable states - photon-added coherent states - and demonstrate their computational complexity when evolved using linear optical networks and measured using photodetection. We find that, provided the coherent-state amplitudes are upper bounded by an inverse polynomial in the size of the system, the sampling problem remains computationally hard.
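
For intuition, the photon-number distribution of a single-photon-added coherent state has a simple closed form. The sketch below (illustrative code, with the inverse-polynomial scaling of the amplitude put in by hand) shows that as the amplitude shrinks the state approaches a single photon, recovering ordinary boson sampling.

```python
import numpy as np
from math import factorial

def pacs_number_distribution(alpha, nmax=40):
    """Photon-number distribution of the single-photon-added coherent state
    a^dag|alpha> / sqrt(1 + |alpha|^2). Since a^dag|n> = sqrt(n+1)|n+1>,
    P(n + 1) is proportional to (n + 1) |alpha|^(2n) e^(-|alpha|^2) / n!."""
    x = abs(alpha) ** 2
    p = np.zeros(nmax + 1)
    for n in range(nmax):
        p[n + 1] = (n + 1) * x ** n * np.exp(-x) / factorial(n)
    return p / (1 + x)                       # closed-form normalisation

for m in (4, 16, 64):                        # amplitude ~ 1/poly(system size)
    p = pacs_number_distribution(alpha=1 / m)
    print(m, p[1].round(4), p[2].round(4))   # single-photon term dominates
```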

New paper: Self-avoiding quantum walks

Full paper here.

Quantum walks exhibit many unique characteristics compared to classical random walks. In the classical setting, self-avoiding random walks have been studied as a variation on the usual classical random walk. Classical self-avoiding random walks have found numerous applications, most notably in the modelling of protein folding. We consider the analogous problem in the quantum setting. We complement a quantum walk with a memory register that records where the walker has previously resided. The walker is then able to avoid returning to previously visited sites. We parameterise the strength of the memory recording and the strength of the memory back-action on the walker's motion, and investigate their effect on the dynamics of the walk. We find that by manipulating these parameters the walk can be made to reproduce ideal quantum or classical random walk statistics, or a plethora of more elaborate diffusive phenomena. In some parameter regimes we observe a close correspondence between classical self-avoiding random walks and the quantum self-avoiding walk.
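
As a baseline for the memory-endowed model, here is a minimal simulation of the standard (memoryless) discrete-time Hadamard walk on the line, whose ballistic spread is the 'ideal quantum' limit referred to above; the self-avoiding model itself adds a memory register on top of this and is not reproduced here.

```python
import numpy as np

def hadamard_walk(steps):
    """Standard discrete-time Hadamard walk on the line: amplitudes over
    (position, coin), with a coin flip followed by a coin-conditioned shift."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps] = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H                             # flip the coin
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                  # coin 0 steps right
        new[:-1, 1] = psi[1:, 1]                  # coin 1 steps left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

steps = 100
p = hadamard_walk(steps)
x = np.arange(-steps, steps + 1)
sigma = np.sqrt((p * x ** 2).sum() - (p * x).sum() ** 2)
print(sigma, np.sqrt(steps))   # ~0.54 * steps, versus sqrt(steps) classically
```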

New paper: Quantum random walks on congested lattices

Full paper here.

We consider quantum random walks on congested lattices and contrast them with classical random walks. Congestion is modelled with lattices containing static defects, which reverse the walker's direction. We implement a dephasing process after each step, which allows us to smoothly interpolate between classical and quantum random walkers, as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker, and that this advantage persists even on heavily congested lattices. We also observe that the quantum walker is extremely sensitive to our model of dephasing.
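
One common way to realise the classical-quantum interpolation mentioned above is to damp the off-diagonal elements of the walker's density matrix after every step. The sketch below implements that model on a defect-free line (the congestion defects and finite boundary of the paper are omitted, and the paper's dephasing model may differ in detail), with p = 0 giving the ballistic quantum walk and p = 1 the diffusive classical walk.

```python
import numpy as np

def dephased_walk_spread(steps, p):
    """1D coined walk with strength-p dephasing applied after every step:
    p = 0 is the pure quantum walk, p = 1 reproduces the classical random
    walk (defect-free line; a sketch of the interpolation only)."""
    n = 2 * steps + 3                            # sites, with a safety margin
    d = 2 * n                                    # basis index = 2 * x + coin
    H = np.kron(np.eye(n), np.array([[1, 1], [1, -1]]) / np.sqrt(2))
    S = np.zeros((d, d))
    for x in range(n):
        S[2 * ((x + 1) % n), 2 * x] = 1          # coin 0 steps right
        S[2 * ((x - 1) % n) + 1, 2 * x + 1] = 1  # coin 1 steps left
    U = S @ H
    psi = np.zeros(d, dtype=complex)
    psi[2 * (n // 2)] = 1 / np.sqrt(2)           # centred, symmetric coin
    psi[2 * (n // 2) + 1] = 1j / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    for _ in range(steps):
        rho = U @ rho @ U.conj().T
        rho = (1 - p) * rho + p * np.diag(np.diag(rho))  # partial dephasing
    prob = np.real(np.diag(rho)).reshape(n, 2).sum(axis=1)
    x = np.arange(n) - n // 2
    return np.sqrt((prob * x ** 2).sum() - (prob * x).sum() ** 2)

for p in (0.0, 0.5, 1.0):
    print(p, dephased_walk_spread(40, p))        # spread falls as p grows
```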

New paper: Sampling generalized cat states with linear optics is probably hard

Full paper here.

Boson-sampling has been presented as a simplified model for linear optics quantum computing. In the boson-sampling model, Fock states are passed through a linear optics network and sampled via number-resolved photodetection. It has been shown that this sampling problem likely cannot be efficiently classically simulated. This raises the question as to whether there are other quantum states of light for which the equivalent sampling problem is also computationally hard. We present evidence that a very broad class of quantum states of light - arbitrary superpositions of two or more coherent states - when evolved via passive linear optics and sampled with number-resolved photodetection, likely implements a classically hard sampling problem.
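
For a feel of the states involved, here is a sketch of the Fock-space structure of a single-mode two-component cat state, the simplest member of the class (the hardness result concerns sampling many such states through a large linear network, which this snippet does not attempt):

```python
import numpy as np
from math import factorial

def cat_number_distribution(alpha, phi=np.pi, nmax=30):
    """Photon-number distribution of the (unnormalised) superposition
    |alpha> + e^(i phi) |-alpha>, the simplest generalised cat state."""
    c = np.array([np.exp(-abs(alpha) ** 2 / 2)
                  * (alpha ** n + np.exp(1j * phi) * (-alpha) ** n)
                  / np.sqrt(factorial(n)) for n in range(nmax + 1)])
    p = np.abs(c) ** 2
    return p / p.sum()

p = cat_number_distribution(alpha=2.0)       # phi = pi: the 'odd' cat state
print(p[:6].round(4))                        # even photon numbers vanish
```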

New paper: Spontaneous parametric down-conversion photon sources are scalable in the asymptotic limit for boson-sampling

Full paper here.

Boson-sampling has emerged as a promising avenue towards post-classical optical quantum computation, and numerous elementary demonstrations have recently been performed. Spontaneous parametric down-conversion is the mainstay for single-photon state preparation, the technique employed in most optical quantum information processing implementations to date. Here we present a simple architecture for boson-sampling based on multiplexed parametric down-conversion and demonstrate that the architecture is limited only by the post-selected detection efficiency. That is, given that detection efficiencies are sufficiently high to enable post-selection, photon-number errors in the down-converters are sufficiently low to guarantee correct boson-sampling most of the time. Thus, we show that parametric down-conversion sources will not present a bottleneck for future boson-sampling implementations. Rather, photodetection efficiency is the limiting factor, and future implementations may continue to employ down-conversion sources.
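
The photon-number errors in question stem from the thermal statistics of down-conversion: the heralded output contains higher-order terms with a probability set by the pump strength. A back-of-the-envelope sketch (illustrative function and parameter values, not the paper's figures):

```python
import numpy as np

def heralded_single_photon_fidelity(r, eta):
    """P(n = 1 | herald click) for one SPDC source: thermal photon-number
    statistics P(n) = (1 - l) l^n with l = tanh(r)^2, heralded by a bucket
    detector of efficiency eta (illustrative model and parameter values)."""
    lam = np.tanh(r) ** 2
    n = np.arange(60)
    p_n = (1 - lam) * lam ** n              # marginal photon-number statistics
    p_click = 1 - (1 - eta) ** n            # click probability given n photons
    return p_n[1] * p_click[1] / (p_n * p_click).sum()

for r in (0.05, 0.1, 0.3):
    print(r, heralded_single_photon_fidelity(r, eta=0.9))
# Weak pumping drives the fidelity towards 1 at the cost of a low heralding
# rate; multiplexing many such sources restores the rate, which is why the
# scheme ends up limited by detection efficiency rather than the sources.
```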

My prediction for the next 10 years of linear optics quantum computing

People often ask me how long it will be before we have an optical quantum computer with post-classical capabilities. My answer, until recently, was that it would be a very, very long time. However, a recent landmark theoretical study by Aaronson & Arkhipov (here) suggests that the timeline could be much shorter than previously expected. They presented a model for optical quantum computing called 'Boson-sampling', which significantly relaxes the requirements for optical quantum computing, yet likely implements an algorithm that cannot be efficiently classically simulated. From a quantum computing perspective this is exactly what we want - a computer that no classical computer in the world can efficiently simulate.

The model is very simple. We prepare a bunch of single photons, pass them through a passive linear optics network (i.e. beamsplitters and phase shifters - no feedforward, no quantum memory, no post-selection), and then measure the photon statistics at the end. Importantly, while implementing a classically hard problem, the model is (strongly) believed not to be universal for quantum computing.
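
Concretely, the probability of each (collision-free) measurement outcome is given by the permanent of a submatrix of the network's unitary, and the permanent takes exponential time to compute by the best known methods; that is the source of the classical hardness. A minimal sketch (illustrative mode choices, Haar-random network):

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n poly(n)) time: the best
    known scaling, which is what makes Boson-sampling hard to simulate."""
    n = len(A)
    total = 0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** (n - k) * np.prod(A[:, list(cols)].sum(axis=1))
    return total

# Probability of one collision-free outcome: photons enter modes S, are
# detected in modes T, for a Haar-random network U (illustrative instance).
rng = np.random.default_rng(1)
m, S, T = 6, [0, 1, 2], [1, 3, 5]
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(Z)                       # Haar-random unitary (up to phases)
print(abs(permanent(U[np.ix_(T, S)])) ** 2)  # |Per(U_TS)|^2
```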

Recently, numerous experimental groups have begun implementing elementary demonstrations of Boson-sampling. In the last few weeks there have been no fewer than four experimental demonstrations of three-photon Boson-sampling (here, here, here and here). The demonstration by Broome et al. (here) was recently published in Science.

Since full-blown optical quantum computing is a very long-term goal, here's my prediction for the next ten years.

In 2014 we'll see another Science paper demonstrating four-photon Boson-sampling. In 2015 there will be another Science paper with five photons. The following year we'll see - here's a scoop - a six-photon Boson-sampling paper, published in Science of course. And this will continue, on and on, until the experimentalists run out of single-photon sources and photodetectors. Then, in ten years' time, we'll have a device with post-classical capabilities and everyone will rejoice.

Here's the catch. First, unlike many other computational problems, the output of Boson-sampling cannot even be efficiently verified (unlike, say, Shor's algorithm, which resides in NP). That is, if I tell you the output statistics that I measured, there is no way of determining whether the output is correct. So how will we ever know that our computer is working correctly? One approach (the one used by Broome et al.) is, instead of verifying the output of the computer, to characterise the unitary network defining the transformation. This can be implemented efficiently (see work by Rahimi-Keshari et al. here), but relies on the assumption that the network is purely unitary, which, in general, is not the case (e.g. as a result of effects like mode-mismatch and higher-order photon-number terms). Second, there are no known algorithmic applications for Boson-sampling. Aaronson & Arkhipov present a strong computational complexity argument that Boson-sampling is classically hard to simulate, but, at this stage, no one has actually mapped it to solving a problem of interest. This is in contrast to universal quantum computation, for which a plethora of algorithms, most famously Shor's integer factorisation algorithm, are known.

Nonetheless, I fully expect that this is the direction optical quantum computing will (and should) follow in the near future - one Science paper after another, each adding a few more photons and a bunch more modes.

It's exciting, but, based on present understanding, completely and utterly useless. Nonetheless, I look forward to seeing how far this concept can be pushed, and I hope that in the meantime the theorists will come up with a killer application for these devices (I personally don't know how to do this - I'm not sure that anyone does).

On a positive note, my recent work suggests that Boson-sampling may still implement a hard algorithm even in the presence of very high loss rates (here) and very low photon fidelities and spectral purities (here). This is in contrast to universal optical quantum computing schemes, which place very stringent requirements on these effects (the best known fault-tolerant schemes still require error rates below roughly 1%). This means that Boson-sampling may be much more viable with present-day technology than other optical quantum computing schemes.

I look forward to future demonstrations with a couple of dozen photons, at which point a reasonable claim can be made that we have a device with post-classical capabilities (albeit still not very useful).

New paper: Quantum walks with memory – goldfish, elephants and wise old men

Full article here.

by Peter P. Rohde, Gavin K. Brennen, Alexei Gilchrist

Quantum walks have emerged as an interesting approach to quantum information processing, exhibiting many unique properties compared to the analogous classical random walk. Here we introduce a model for a discrete-time quantum walk with memory by endowing the walker with multiple recycled coins and implementing a physical memory function via a history-dependent coin flip. By numerical simulation we observe several phenomena. First, in one dimension, walkers with memory have a persistent quantum ballistic speed-up over classical walks, just as found in previous studies of multi-coined walks with trivial memory function. However, measurement of the multi-coin state can dramatically shift the mean of the spatial distribution. Second, we consider spatial entanglement in a two-dimensional quantum walk with memory and find that memory destroys entanglement between the spatial dimensions, even when entangling coins are employed. Finally, we explore behaviour in the presence of spatial randomness and find that, in contrast to single-coined walks, multi-coined walks do not localise, and in fact a memory function can speed up the walk relative to a fully decohered multi-coin walker with trivial memory. We explicitly show how to construct linear optics circuits implementing the walks, and discuss prospects for classical simulation.
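
For context, the 'trivial memory function' baseline is the multi-coin walk in which a fixed set of coins is simply cycled in order. A minimal sketch of that baseline is below (my own illustrative code, not the paper's history-dependent model), showing that the spread remains ballistic.

```python
import numpy as np

def multi_coin_walk(steps, m=2):
    """Walk on the line with m coins cycled in fixed order (the
    trivial-memory baseline; the paper's model makes the coin flip
    history-dependent). State: amplitudes over (position, coin register)."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2 ** m), dtype=complex)
    psi[steps, 0] = 1.0                          # centred walker, coins |0...0>
    regs = np.arange(2 ** m)
    for t in range(steps):
        b = t % m                                # the active (recycled) coin
        bit = (regs >> b) & 1                    # its value in each register
        # Hadamard on the active coin: H|0> = (|0>+|1>)/sqrt(2), etc.
        psi = (psi[:, regs ^ (1 << b)] + (-1.0) ** bit * psi) / np.sqrt(2)
        new = np.zeros_like(psi)
        right = bit == 0                         # active coin 0 steps right
        new[1:, right] = psi[:-1, right]
        new[:-1, ~right] = psi[1:, ~right]
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

steps = 80
p = multi_coin_walk(steps, m=2)
x = np.arange(-steps, steps + 1)
print(np.sqrt((p * x ** 2).sum()), np.sqrt(steps))  # ballistic vs diffusive rms
```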

New paper: The information capacity of a single photon

Full article here.

by Peter P. Rohde, Joseph F. Fitzsimons, Alexei Gilchrist.

Quantum states of light are the obvious choice for communicating quantum information. To date, encoding information into the polarisation states of single photons has been widely used, as these states form a natural closed two-state qubit. However, photons are able to encode much more -- in principle infinite -- information via the continuous spatio-temporal degrees of freedom. Here we consider the information capacity of an optical quantum channel, such as an optical fibre, where a spectrally encoded single photon is the means of communication. We use the Holevo bound to calculate an upper bound on the channel capacity, and relate this to the spectral encoding basis and the spectral properties of the channel. Further, we derive analytic bounds on the capacity of such channels and, in the case of a symmetric two-state encoding, calculate the exact capacity of the corresponding channel.
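
For the symmetric two-state case, the Holevo bound for pure wavepackets depends only on their overlap: the average state has eigenvalues (1 ± |s|)/2, so the bound is their binary entropy. A sketch, with a Gaussian spectral overlap used as an assumed example encoding:

```python
import numpy as np

def holevo_two_pure_states(s):
    """Holevo bound (bits per photon) for equiprobable pure states with
    inner product s: the average state has eigenvalues (1 +- |s|)/2, and
    chi is their binary entropy (the pure signal states carry no entropy)."""
    lam = np.array([(1 + abs(s)) / 2, (1 - abs(s)) / 2])
    lam = lam[lam > 0]
    return float(-(lam * np.log2(lam)).sum())

# Assumed example encoding: two Gaussian spectral amplitudes of width sigma,
# detuned by delta, with overlap s = exp(-delta^2 / (8 sigma^2)).
sigma, delta = 1.0, 3.0
s = np.exp(-delta ** 2 / (8 * sigma ** 2))
print(s, holevo_two_pure_states(s))          # -> 1 bit as the overlap -> 0
```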