Comment on “Error tolerance and tradeoffs in loss- and failure-tolerant quantum computing schemes”

This post is kindly contributed by Sean Barrett, commenting on my recent paper “Error tolerance and tradeoffs in loss- and failure-tolerant quantum computing schemes”. Warning: this is a technical post.

Hi Peter,

I think you make some interesting points in this paper, which are well worth bearing in mind for anyone developing error-tolerant schemes for quantum computation.

I just have a few comments.

You use the somewhat pejorative term “exponential blow-out” to describe how the dephasing noise is amplified in schemes where the gates are non-deterministic but fail in a heralded way; in particular, you argue that this is a result of the extra measurements that must be performed to remove redundant qubits from the cluster. You say that the effective error teleported to the remaining qubit is

p_eff = 1 - (1 - p_err)^n

where p_eff is the effective error rate on the remaining unmeasured qubit, p_err is the physical depolarising error probability per qubit, and n is the number of redundant qubits that must be removed.
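
As a quick sanity check, here is a minimal numerical sketch of this expression (the example values below are mine, chosen purely for illustration):

    # Effective error on the remaining qubit after n redundant qubits are measured
    # out, each contributing an independent error with probability p_err.
    def p_eff(p_err, n):
        return 1 - (1 - p_err) ** n

    print(p_eff(1e-3, 50))   # ~0.0488, close to the naive linear estimate 50 * 1e-3 = 0.05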

There are a couple of things to say about this equation. Firstly, it’s not immediately clear to me whether this applies to all graph states, or only to GHZ-like graphs in which one qubit is connected to all the others. It would be nice to see a more explicit argument for how errors are “teleported” in linear (1-d) cluster states, for example. One other point in this regard: in the paper Phys. Rev. A 73, 012304 (2006), we give a procedure for building 2-d cluster states which is quite different from the one you analyse (I think the difference ultimately stems from the fact that Duan and Raussendorf make different physical assumptions about what happens when their gates fail). I would be interested to know how the errors accumulate using this method (which is also quite similar to the one given by Browne and Rudolph in Phys. Rev. Lett. 95, 010501 (2005)).

Secondly, your claim of an “exponential blow-out” is based on the fact that the “overhead cost” n appears in the exponent of this expression. Later, you point out that n is proportional to 1/p_gate, where p_gate is the success probability of a single two-qubit gate. What I want to point out is that your expression for p_eff is exactly what one would expect for the total probability of obtaining at least one error after n Bernoulli trials of a process where the error probability is *fixed* for each trial. This is a very common situation: for example, if I buy a lottery ticket each week for n weeks, the total probability of my winning the jackpot at least once is given by your expression if I interpret p_err as the probability of winning it in any given week. In this situation, is it reasonable to say that there is an “exponential blow-out” in the probability of my winning the lottery jackpot at least once? Only if I keep buying tickets for millions of weeks. On any shorter timescale, most people would reasonably describe my total chances as increasing *linearly* with n.
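
To put rough numbers on this analogy (the jackpot odds below are invented purely for illustration):

    p_win = 1e-7                    # assumed probability of winning in any given week
    for weeks in (52, 520, 5200):   # 1, 10 and 100 years of tickets
        at_least_once = 1 - (1 - p_win) ** weeks
        print(weeks, at_least_once, weeks * p_win)   # exact value vs. linear estimate
    # The two numbers agree to many digits, i.e. the growth is effectively linear in n.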

A little manipulation of your expression shows that (assuming p_err << 1) we can write it as

p_eff = 1 - exp(-p_err * n)

Now if we take n = A/p_gate (which you argue is the scaling for the particular cluster-growing scheme you discuss), where A is some constant factor, it’s clear that the error really grows linearly with 1/p_gate, and the exponential behaviour only kicks in once p_err * n ~= 1, i.e. once p_gate ~= A * p_err. If A is a constant of order 1, what this says is that we only have a significant problem if the success probability of the gate is of the same order of magnitude as the depolarising error probability.
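
A small numerical sketch of this crossover (the values of p_err, A and the range of p_gate below are assumptions chosen purely for illustration):

    p_err = 1e-3   # assumed physical error per removed qubit
    A = 1.0        # assumed proportionality constant in n = A / p_gate

    for p_gate in (0.5, 0.1, 0.01, 0.002, 0.001):
        n = A / p_gate
        exact = 1 - (1 - p_err) ** n   # the full expression for p_eff
        linear = p_err * n             # the naive linear estimate
        print(p_gate, round(exact, 4), round(linear, 4))
    # The two values agree until p_err * n approaches 1, i.e. until p_gate drops to
    # roughly A * p_err; only then does the saturating shape of p_eff become apparent.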

For larger values of p_gate, what we have is linear scaling in 1/p_gate. This is not so surprising, although it may still be an important consideration for a particular implementation using non-deterministic gates. But it’s not really a problem that’s specific to these sorts of schemes. Indeed, in most implementations that have been proposed, that I am aware of, there is some kind of time or space overhead relative to the idealised circuit-model quantum computer. For instance, many implementations are restricted to nearest-neighbour qubit interactions. In this case, to perform a gate between an arbitrary pair of qubits in the computer, they must be brought close together, either using SWAP operations on fixed qubits or, as in the case of ion-trap experiments, by physically moving the qubit-carrying entities around. Clearly, there’s a time overhead in doing either of these that’s proportional to the physical size of the computer. While we’re waiting for the qubits to be moved around, they will accumulate depolarising errors due to interaction with the environment. One could also argue that this overhead ultimately leads to an “exponential blow-out” in the depolarising noise, although I would prefer not to be so pessimistic!

I also wanted to make a couple of points about the distributed QC schemes I’ve been involved with, e.g. ref [5] of your paper and Phys. Rev. A 73, 012304 (2006), which are perhaps relevant in the light of your work. In these schemes we have a distributed architecture, in which each qubit is a trapped atom, ion, quantum dot, or some other solid-state gadget. Each qubit sits a long way from the others, so that it can be easily controlled by local control fields, and so that the decoherence processes can be well characterised and understood. We introduce entanglement between pairs of qubits using a single- (or few-) photon interference effect, which has the property that it does not always work (either because of photon loss or, in the case of our first paper, because the success probability is inherently less than 1/2). Now, of course, this non-deterministic entangling scheme introduces overheads into our computing scheme, relative to one in which the gate operations are deterministic, and therefore may lead to the sort of error amplification you describe. But the hope is that there are other aspects of these schemes that more than compensate for this problem.

For instance, because we can engineer each qubit in isolation from the others, one can imagine that this makes the process of characterising decoherence and eliminating error processes easier (e.g. process tomography on a single isolated qubit is much easier than on a register containing thousands of qubits which may have residual interactions with one another). So we might be able to make p_err very small indeed.

Furthermore, since we are not limited to nearest neighbour interactions, we don’t have a particular problem with the overhead involved in entangling distant qubits (although of course we do, in a sense, transmit entanglement via photons, so ultimately the finite speed of light will become a problem…).

Finally, because we are computing with graph/cluster states, there are some tricks we can do to reduce the “hidden overheads” inherent in many algorithms and protocols: e.g. we can potentially eliminate all the Clifford-group parts of the algorithm by directly preparing appropriate graph states.

Cheers,
Sean

See the comments section below for follow-up discussion.

4 thoughts on “Comment on “Error tolerance and tradeoffs in loss- and failure-tolerant quantum computing schemes””

  1. Sean, thanks for your comments. I have a few thoughts regarding your main point.
    Your main criticism is that, while our expression is strictly speaking exponential, in the low p_err regime it can be very well approximated by a linear function. This is certainly the case. In the long term, for scalable quantum computing to be possible we will need to operate in this low p_err regime, since we will need to be below the fault-tolerance threshold. I have three main points in response to this:
    1. State preparation techniques have applications beyond scalable quantum computing. For example, GHZ states can be produced using the same sorts of divide-and-conquer techniques that are used for cluster states. These have applications in areas like quantum cryptography (for example, quantum secret-sharing protocols), and no doubt in plenty of other places. In the short to medium term, experimental demonstrations of these kinds of protocols will most certainly not be in this extremely low p_err regime.
    2. In the near future, experimentalists will no doubt begin performing in-principle demonstrations of these sorts of techniques. Again, these in-principle demonstrations will be a long way from the error levels required for scalable, fault-tolerant quantum computing.
    3. Finally, and I think most importantly, whether we call the blow-out exponential or linear is, I think, more a debate over terminology than anything else. At the end of the day, the central claim still holds that if these kinds of loss-tolerant protocols are to be operated in a regime where they do not cause unacceptable magnification of noise rates, their loss-tolerance will be reduced by orders of magnitude.
    Regards,
    Peter.

  2. Anonymous, you make a very good point. Where IS the picture of Sean in a sarong? Care to answer that Sean? We have Sarongs and cameras waiting for you in our office.

  3. Hi again Sean,

    Having thought a little more about this issue of ‘exponential’ versus ‘linear’ blow-out, I think some of my earlier points are a bit duff. In my previous comment I said that in the near future we will not be in the very low p_err regime, and that the exponential effects would therefore be more significant. While it is true that we would no longer be sitting in the linear region, for an exponential of this form the exponential region scales strictly more favourably than the linear region, since we are asymptotically approaching unity. That is, 1-(1-p_err)^n is really an upside-down exponential decay, as opposed to exponential growth. So the worry in my previous comment, effectively “what if p_err is on the order of, say, 0.1 instead of 10^-3?”, is pretty irrelevant, since the scaling is no worse than if p_err were 10^-3. I therefore would officially like to transfer points (1) and (2) to the ‘derrrr’ bin.

    Nonetheless, I still believe in point (3). I can see your point that the term ‘exponential blow-out’ has certain connotations. I think the real potential misunderstanding here is that we’re talking about a probability, which is bounded between 0 and 1, so an ‘exponential blow-out’ behaves fundamentally differently from what the term usually implies. However, it’s probably safe to assume that readers of the article will be aware of this. Still, this is a good point and something I’ll keep in mind for future revisions.
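
    To make this concrete, here is a rough numerical sketch of that saturating behaviour (the p_err and n values are illustrative only):

        # Even for a relatively large per-qubit error, 1 - (1 - p_err)^n simply
        # saturates towards 1; initially it grows no faster than linearly in n.
        for p_err in (1e-3, 0.1):
            for n in (1, 10, 100, 1000):
                print(p_err, n, 1 - (1 - p_err) ** n)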
