New paper: Upper bounds on the tradeoff between loss and error rates in non-degenerate quantum error correcting codes

This is a technical post

I’ve just posted my latest paper to the arXiv . In my previous paper we discussed the fact that several recent qubit loss and gate failure tolerant quantum error correction schemes come at the expense of a blow-out in depolarizing noise rates. In this paper I’ve generalized this notion and derived an upper bound on the tradeoff between qubit loss and depolarizing error rates for the class of non-degenerate quantum error correcting codes. I’ve approached this by modifying the well-known quantum Hamming bound, which enforces an upper bound on the number of depolarizing errors a non-degerate code of given length can correct for.

The standard quantum Hamming bound is

where a k qubit logical state is encoded into an n qubit codeword, and t depolarizing errors occur. The generalized bound I derive is

where t_l located (i.e. qubit loss) and t_u unlocated depolarizing errors occur. Roughly speaking, this result implies that an optimal non-degenerate code can correct against twice as many located as unlocated errors with a linear tradeoff between the two. It would be interesting to establish whether a class of codes exist which are able to saturate this bound. If so, I think such codes would be quite useful, and have broader applicability than codes which are tailored specifically to either located or unlocated errors.

Leave a Reply