New paper: Non-deterministic approximation of photon number discriminating detectors using non-discriminating detectors

Quantum computing is one of those mysterious fields that most people don't have any familiarity with at all. This problem is exacerbated by the fact that the literature in this field is typically very technical and out of reach to non-specialists. In fact, even I, as a student in the field, find it extremely difficult to follow the literature. Consequently, I've decided that whenever I publish a new paper, I'll complement it with a layman's description of the paper in this blog, hopefully in a form palatable to those who have never even heard of the word physics before.

Several months ago my first physics paper was accepted for publication in the Journal of Optics B, entitled "Non-deterministic approximation of photon number discriminating detectors using non-discriminating detectors" (e-print quant-ph/0411114). This is quite a mouthful, I confess, but the idea behind the paper is very simple and I'll now attempt to explain it in as non-technical a manner as possible.

In optical quantum computing (see my previous post for an introduction), my area of research, we use photons, or particles of light, to represent qubits. One of the most fundamental requirements for experimentally realizing optical quantum gates is photo-detectors. A photo-detector is simply a device which tells you when a photon hits it. However, there are two types of photo-detectors: discriminating (or number-resolving) and non-discriminating. A discriminating photo-detector is able to tell you how many photons have hit it, whereas a non-discriminating detector can only tell you if one or more photons hit it, but not how many. In the long term, if we're going to be able to construct optical quantum computers, it turns out that we'll need the former, discriminating variety. However, building discriminating photo-detectors is very technologically challenging and presently we only have reliable non-discriminating ones. In this paper I describe how non-discriminating detectors can be used to approximate discriminating detectors. The proposal is non-deterministic, which simply means that the device doesn't always work, but when it does, you can be very confident that it has worked correctly. Because the proposal is non-deterministic, its application is quite limited. However, there are still many experiments for which the proposal may be applicable.

My proposal describes a class of detectors which do the following. We have some unknown number of photons, n, hitting our detector, and we want to know whether the number of incident photons is some particular value m. If n=m our detector gives a response; otherwise it does not.

I'll begin by explaining the simplest case, an m=1 detector. That is, a detector which tells us if there is exactly one incident photon. There are only two basic components we need to do this: a beamsplitter and two non-discriminating detectors. A beamsplitter is simply a device which, when a photon hits it, sends it one way with some probability P, and the other way with probability 1-P. To construct our m=1 detector we simply direct the incoming beam onto a very low reflectivity beamsplitter (i.e. very small P). At each of the beamsplitter outputs we place a non-discriminating detector. Now imagine that n=1, i.e. a single photon is incident on the beamsplitter. The photon has probability P of hitting the first detector, and probability 1-P of hitting the second detector. If we have two incident photons (i.e. n=2) then the probability that both photons reach the first detector is P squared, and the rest of the time at least one of the photons will reach the second detector. In general, for an arbitrary n-photon input state, the probability that all photons reach the first detector drops off exponentially as P to the power of n. Recall that P is chosen to be very small, so P squared is much smaller than P. This realization forms the basis of the proposal. We apply a so-called post-selection procedure. Namely, if the first photo-detector triggers, but the second does not, we can say with high probability that exactly one photon was incident on the beamsplitter, since the probability that multiple photons are all directed to the first detector is extremely low. If the second detector does trigger, we discard the result as a failure. This is where the non-determinism comes in. So we see that using a low-reflectivity beamsplitter and two non-discriminating detectors we can construct a device which fails most of the time, but when it succeeds, tells us that we almost certainly had exactly one photon at the input.
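To make this concrete, here is a minimal Monte Carlo sketch of the idea (my own illustration, not code from the paper; the 50/50 one-or-two-photon input and the reflectivity value are hypothetical choices). It scatters the incident photons across a beamsplitter of reflectivity P, keeps only the runs where the first detector clicks and the second stays silent, and estimates how often those accepted runs really did contain exactly one photon.

```python
import random

def beamsplitter_trial(n, P):
    """Send n photons onto a beamsplitter of reflectivity P. Each photon
    independently goes to detector 1 with probability P, otherwise to
    detector 2. Returns whether each (ideal, non-discriminating)
    detector clicks."""
    to_d1 = sum(1 for _ in range(n) if random.random() < P)
    return to_d1 > 0, (n - to_d1) > 0

def m1_detector_stats(P, trials=200_000):
    """Post-select on 'detector 1 clicks, detector 2 silent' for a
    hypothetical input of one or two photons with equal probability."""
    accepted = correct = 0
    for _ in range(trials):
        n = random.choice([1, 2])
        d1, d2 = beamsplitter_trial(n, P)
        if d1 and not d2:                 # the post-selection step
            accepted += 1
            correct += (n == 1)
    return accepted / trials, correct / accepted

acceptance, confidence = m1_detector_stats(P=0.01)
print(f"accepted {acceptance:.2%} of runs; of those, "
      f"{confidence:.1%} were genuine single-photon events")
```

With P=0.01 the device throws away roughly 199 runs out of every 200, but around 99% of the runs it keeps really were single-photon events, which is exactly the fail-often-but-trustworthy behaviour described above.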

The idea I just described for an m=1 detector can be generalized to construct arbitrary m-photon discriminating detectors, by using a chain of m beamsplitters instead of just one. In principle the idea works very well, however there is one experimental challenge in particular which limits its effectiveness. Specifically, experimental photo-detectors are inefficient, which means they don't always click when a photon hits them. Let's consider what this means in the context of the m=1 detector I described earlier. We said that when the first detector clicks, but the second does not, we can be fairly sure that we had exactly one photon in our input. Suppose, however, that the detectors are inefficient. This means that a photon might hit the second detector but not trigger it, so we may in fact have had two incident photons and simply missed one of them. Clearly this makes the results erroneous. This problem is actually quite significant, since present-day detectors are really quite inefficient. Nonetheless, with the future in mind, when highly efficient detectors may be available, the proposal may find a use.
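To put some numbers on the efficiency problem, here is a variation of the earlier sketch with a detector efficiency parameter, eta, added (again my own illustration with hypothetical figures): each photon that reaches a detector only registers a click with probability eta, so a photon can slip past the second detector unnoticed.

```python
import random

def lossy_trial(n, P, eta):
    """n photons hit a beamsplitter of reflectivity P; a photon reaching
    either detector registers a click only with probability eta."""
    d1 = d2 = False
    for _ in range(n):
        goes_to_d1 = random.random() < P
        if random.random() < eta:          # photon actually detected
            if goes_to_d1:
                d1 = True
            else:
                d2 = True
    return d1, d2

def confidence(P, eta, trials=500_000):
    """Fraction of accepted events that truly contained one photon,
    for a hypothetical 50/50 one-or-two-photon input."""
    accepted = correct = 0
    for _ in range(trials):
        n = random.choice([1, 2])
        d1, d2 = lossy_trial(n, P, eta)
        if d1 and not d2:
            accepted += 1
            correct += (n == 1)
    return correct / accepted

for eta in (1.0, 0.9, 0.6):
    print(f"detector efficiency {eta:.0%}: confidence {confidence(0.01, eta):.1%}")
```

Even a modest drop in efficiency eats away at the confidence, because a two-photon event in which the second photon goes undetected looks exactly like a genuine single-photon event.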

Throat singing adventures

About two years ago I became interested in 'throat singing', an unusual form of singing traditionally performed in Tuva, Tibet, Mongolia and several other places. Throat singing differs from conventional singing in that the singer controls the overtones in the voice rather than the fundamental, which is done by varying the shape of the mouth and the position of the tongue. The voice box itself only ever produces a single sound, a low-pitched drone. The cavity of the mouth then acts as a bandpass filter, which is selectively tuned to different frequencies emanating from the voice box. In order for this to work effectively the voice box must produce a rich spectral profile, i.e. have a significant component of overtones, otherwise there won't be anything to tune into. This is achieved by droning with a very constricted throat, which makes a gruff, and therefore spectrally rich, tone.

Naturally, I was curious to try it for myself, so I tracked down some instructions and started practising. Although experienced throat singers can produce incredible harmonics using just their mouth, I have found that the effects can be enhanced enormously by practising in the right places. In particular, bathrooms, concrete stairwells, underground carparks and caves have particularly good resonance, and without much practice an amateur like myself can soon have an entire room ringing and buzzing using just their voice. It's a very exhilarating experience. Even within a particular environment, like a bathroom, different locations can have completely different resonant characteristics. For example, I've found that under the shower I get particularly good resonance when I stand directly over the drain pipe, facing down. At the right pitches this sets up a standing wave in the pipe, much like inside a flute or other woodwind instrument. Also very effective is facing into the corner of the room.

The moral of the story is that if you're ever walking down the stairs and the whole thing starts sounding like someone rang a churchbell, don't be too concerned, it's probably just some innocent throat singer wannabe, like myself, trying to squeeze in a bit of practice.

If you're interested in hearing what throat singing sounds like, there are heaps of free MP3s available for download. Also, Scientific American has an interesting article on Tuvan throat singing.

Mountaineering in New Zealand

I just returned from my long-anticipated mountaineering trip to New Zealand, where I spent four weeks tramping and climbing in the Mt. Cook area. I initially took a Technical Mountaineering Course (TMC) with Alpine Guides, an excellent 10-day course on mountaineering technique, which included nights spent in snow caves, falling into crevasses (not intentionally, I might add), climbing peaks and putting on several kilos from eating so much. If you're interested in getting into the highly addictive sport of mountaineering, I can highly recommend this course. After the TMC I hired a guide for a week to attempt New Zealand's highest peak, Mt. Cook (3754m). Unfortunately the expedition was unsuccessful due to unsafe snow conditions. By moonlight and headtorch we climbed about half-way up the mountain, via the infamous Linda Glacier, only to find ourselves on extremely high-risk avalanche terrain, from where we could not safely proceed. Bad luck I know, but that's all part and parcel of mountaineering. We also spent a couple of days rock-climbing in the Lake Wanaka area, which was also great fun. As much fun as mountaineering is, it's about as hard on the pocket as recreation can get. So before you take it up, ask yourself if you're really willing to be perpetually broke. Personally, I wonder whether I should have taken up knitting instead. I know it would have made my mother happier.
Ice climbing

More on Moore’s Law

Most people will have heard of Moore's Law, that the number of transistors in microprocessors doubles every two years. For example, in 1971, Intel's original microprocessor, the 4004, had a few thousand transistors. Today, state-of-the-art processors have on the order of half a billion transistors. While these statistics are certainly very interesting, there are some other laws regarding the development of semiconductor technology that most people are probably not so familiar with.

The semiconductor fabrication facilities where our Pentiums, Athlons and PowerPCs are manufactured are, as you can probably imagine, not especially cheap. In fact, the cost of fabrication plants is increasing exponentially, much like Moore's Law for the number of transistors on a chip. In 1970 it cost about $6 million to build a state-of-the-art fabrication plant, today it costs a few billion, and by 2010 it is estimated to reach on the order of $18 billion. That's a lot of money for just one fabrication plant. Even today, the costs are so extreme that very few companies have the capital to build such plants. I won't try to predict what implications this will have for the future of semiconductor technology, but it's certainly an amazing, if not alarming, trend.
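For the curious, the back-of-envelope arithmetic behind the 'increasing exponentially' claim, using just the two figures quoted above, looks like this:

```python
import math

cost_1970 = 6e6        # ~$6 million in 1970
cost_2010 = 18e9       # ~$18 billion estimated for 2010
years = 2010 - 1970

growth = (cost_2010 / cost_1970) ** (1 / years) - 1   # implied annual growth rate
doubling = math.log(2) / math.log(1 + growth)         # implied doubling time (years)

print(f"implied growth: {growth:.0%} per year, doubling every {doubling:.1f} years")
```

In other words, these figures imply that the cost of a state-of-the-art plant has been doubling roughly every three and a half years.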

Manufacturing transistors, however, is just one half of the process. The other half is actually designing the circuits that go onto chips, such that all of those hundreds of millions of transistors do something useful. To do this, today's digital systems designers employ very high-tech software which automates most of the development cycle. The capacity of these tools to design large integrated circuits, meaning the number of transistors they can design into a circuit, is also increasing exponentially. However, it is growing at a smaller exponential rate than the number of transistors that fit onto a chip. In other words, software design tools are not keeping up with what the fabrication plants are capable of. So although we have an enormous capacity for squeezing transistors onto chips, we keep falling further and further behind in trying to design useful circuits that fully make use of so many transistors. Fortunately that's not the end of the story. Instead of letting transistors go to waste, systems designers are overcoming this problem by changing their design paradigms. For example, we are beginning to see the emergence of multi-core microprocessors, where the designer puts multiple smaller microprocessor cores onto a chip rather than designing a single larger one. By doing this, the designer isn't as limited by the software and can still make full use of the available transistors. Trends like this are likely to represent the future of microprocessors and it probably won't be long before all of our PCs have many processor cores under the hood.

Sources: Intel Corp., IC Knowledge, seminar by Prof. Jan Rabaey (2003)

5 minutes of political fun

If you've got five minutes to spare, try the World's Smallest Political Quiz, made by the well-known Libertarian organization, the Advocates for Self-Government. The quiz asks 10 multiple-choice questions and gives you a graphical representation of how it rates your political persuasion, on a scale of Conservative, Statist, Liberal and Libertarian. Needless to say, as with all such quizzes, especially political ones, the results should be taken with a salt mine. Nonetheless, it's a good way to get people interested in political issues and get the conversations rolling.

Here's my result:
Political quiz result

Exploring the world

Recently some of my friends in the physics department drew my attention to one of the coolest programs I've ever seen. The program, World Wind, by NASA, allows you to interactively view the Earth. It connects to NASA's servers, which hold a massive, multi-terabyte database of satellite imagery that is streamed into the program. The result is that you can hold the world in your hand, spin it around, and zoom in all the way down to an incredible 15 metre resolution. I should emphasize that the entire Earth is mapped in the database. In addition to the satellite imagery, the database includes topographical data, which can be used to reconstruct elevation profiles. There are also facilities to superimpose city and suburb names, national borders, and even animations of weather patterns. Be warned, however: the program is big (around 250 MB) and you'll need a pretty speedy internet connection to get data from the NASA servers (which have been down a lot of the time due to overload). Unfortunately, the program is only available for Windows.

Below is a snapshot from World Wind, taken looking over Mt. Cook on the South Island of New Zealand, where I'll be climbing in a few weeks' time. For this picture I enabled elevation profiling to reconstruct a 3D view. A pretty remarkable program, and completely free!
World Wind over Mt. Cook, New Zealand

The future of the US economy

This post was inspired by, and refers to, an article I read by CNN on the US Congress' recent passing of a bill to increase the federal borrowing limit by $800 billion.

As one of his first actions since re-election this month, President Bush has increased the federal borrowing limit from $7.38 trillion to $8.18 trillion. The cap places a legal limit on how much debt the government can amass, and, as a result of Bush's ongoing record deficits, which are rapidly approaching the half trillion dollar mark, the existing cap has been reached. This raises some very serious concerns for both the US economy and the world economy as a whole.

Ordinarily, running a significant deficit on a temporary basis would not raise too much concern. In fact, many economists argue that during periods of economic stagnation, which has recently been the case, this is a favourable approach, sometimes referred to as the Keynesian approach (after the famous economist John Maynard Keynes). This line of thought argues that increased government spending and tax reduction (i.e. deficits) act to stimulate the cycle of spending and consumption, thereby creating jobs and lifting productivity. In other words, large-scale spending can serve to kick-start a stagnant economy. However, the present situation is a very different one. The massive spending engaged in by Bush does not represent one-off spending. Instead, it comes about as a result of fundamental structural changes which imply that deficits of this magnitude will be ongoing and will, in fact, probably increase further. Increased military spending complemented by ongoing military commitments in Iraq and Afghanistan, the problem of an ageing population, record high oil prices, and the intention to make permanent the income tax cuts offered by Bush in his first term all contribute to this expectation.

By my calculations, if deficits continue to be of the same magnitude as they are now, then in about two years' time Congress will have to increase the borrowing limit again to avoid defaulting on its debt (which would have absolutely catastrophic consequences). Clearly we have a situation that is not sustainable. At $8.18 trillion, the federal borrowing limit represents about 70% of the GDP of the United States. If the government continues to adopt the band-aid solution of simply raising the borrowing limit whenever it is hit, then eventually paying off the interest on the debt will become untenable and large-scale spending cuts and/or tax increases will be necessary. The approach presently adopted is akin to a credit card user hitting his credit limit, and then, rather than paying off the debt, increasing the credit limit further to go on another shopping spree. It may work fine for a while, but eventually, and inevitably, the level of debt becomes unsustainable. Clearly we all hope the US economy will not reach such a stage. However, unfortunately this is a very real concern and one which would have very serious implications. From the Australian perspective, things would be very bleak indeed, since Australia's economy is highly dependent both upon foreign direct investment, which comes, to a large extent, from the US, and upon foreign export markets, of which the US is one of our largest.
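For what it's worth, the arithmetic behind those two figures is simple enough to check. Here is a rough sketch using only the numbers quoted in this post (the annual deficit is the 'approaching half a trillion' figure mentioned above):

```python
cap_increase = 800e9      # the $800 billion increase just passed
annual_deficit = 450e9    # deficits approaching the half trillion dollar mark
new_cap = 8.18e12         # the new federal borrowing limit
debt_to_gdp = 0.70        # the limit as a share of GDP, as stated above

print(f"years until the new cap is hit: {cap_increase / annual_deficit:.1f}")
print(f"implied US GDP: ${new_cap / debt_to_gdp / 1e12:.1f} trillion")
```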

Mandelbrot sets

Today I was reading an interview in New Scientist with Benoit Mandelbrot, a famous mathematician and the founder of a branch of mathematics known as fractal geometry. Almost everybody will recognize a picture of the so-called Mandelbrot set (below), but probably very few are familiar with exactly what it is or how it comes about. In this post I hope to provide a layman's description of how the Mandelbrot set is constructed. First, however, I must explain what a complex number is, something which some will be familiar with, but others not.

The normal numbers that we deal with every day are the so-called real numbers and can be represented on the number line, which ranges from negative infinity in one direction to positive infinity in the other. Real numbers suffice for most things that we'll ever want to do with numbers; however, there are some limitations on what can be done with them. In particular, it had long bothered mathematicians that there was no solution to the simple question 'what is the square root of -1?'. In fact, using real numbers there simply is no answer to this question (let me know if you think of one). In recognition that there is no numerical answer to this question, mathematicians decided to simply give it a label. And so the square root of -1 was labeled i. What's the point, you ask? Well, the point is that by introducing this label we can proceed to solve problems which we otherwise would have thrown out the window. A complex number is simply a number which contains a real component, which we'll call a, and some multiple of i, which we'll call b, often referred to as the imaginary component. Technically, we can say that a complex number can be expressed in the form a+ib, where a and b are both real numbers. Unlike the real numbers, which can be represented on the number line, a complex number is 2-dimensional and must instead be represented on a plane, called the complex plane, where the real component is one axis of the plane and the imaginary component the other. We can do with complex numbers all the sorts of things that we can do with ordinary numbers, like addition, subtraction, multiplication and division, all using standard algebraic rules.
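If you'd like to play with complex numbers yourself, most programming languages support them directly; here are a few lines of Python by way of illustration (my own example):

```python
i = complex(0, 1)        # the square root of -1, written 1j in Python
print(i * i)             # -> (-1+0j)

z1 = 3 + 4j              # a = 3, b = 4, i.e. the number 3 + 4i
z2 = 1 - 2j
print(z1 + z2, z1 * z2)  # ordinary algebraic rules apply
print(abs(z1))           # distance from zero on the complex plane -> 5.0
```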

The Mandelbrot set is constructed in the complex plane. We start by choosing a point on the complex plane (i.e. a complex number), which we label c. We introduce another complex number, z, which we initially set to zero. We then repeatedly apply the following simple rule: we let the new value of z be the square of the old value of z, plus c. After each iteration of this rule we calculate the magnitude of z, which is simply the distance from the zero-point on the complex plane (i.e. 0+0i) to z. We then compare the magnitude of z to some threshold (a value of 2 is the conventional choice). We say that the depth of the complex number c into the set is how many times the rule can be applied before the magnitude of z exceeds the threshold. Finally we assign a colour or shade of gray to the point c on the complex plane according to its depth. This procedure is repeated for every point on the complex plane. So in summary, a picture of the Mandelbrot set is just a coloured representation of the depth of points on the complex plane into the set.
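The procedure translates almost line-for-line into code. Here is a short sketch (my own, using the conventional threshold of 2 and capping the iteration count at 100) that computes the depth of each point and prints a crude character-based picture of the set:

```python
def mandelbrot_depth(c, threshold=2.0, max_iter=100):
    """Count how many times z -> z*z + c can be applied (starting from
    z = 0) before the magnitude of z exceeds the threshold. Points that
    never escape within max_iter iterations are treated as members of
    the set."""
    z = 0
    for depth in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return depth
    return max_iter

# Crude ASCII rendering of the region -2 < Re(c) < 1, -1 < Im(c) < 1.
shades = " .:-=+*#%@"
for row in range(21):
    im = 1 - 0.1 * row
    line = ""
    for col in range(61):
        re = -2 + 0.05 * col
        depth = mandelbrot_depth(complex(re, im))
        line += shades[min(depth // 10, len(shades) - 1)]
    print(line)
```

Points deep inside the set come out as '@', points that escape almost immediately come out as spaces, and the intricate boundary of the set is everything in between.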

Despite being described by such an incredibly simple rule, the Mandelbrot set is incredibly complicated. In fact it is infinitely complicated. If we zoom in on a region of the Mandelbrot set, things don't begin to flatten out or become regular. Instead the phenomenal beauty and complexity continues. We notice many familiar patterns emerging, most notably the 'bug' silhouette appearing over and over again. Nonetheless, the pattern is not repetitive. We can zoom in, quite literally forever, and never see any repetition or reduction in complexity. This is a generic property of fractals, of which the Mandelbrot set is one. It may seem that the Mandelbrot set, while very attractive, is fairly useless, which is most likely the case. However, this is not the case for fractals in general, which can be used to describe many real-world phenomena. For example, geographic formations, the growth of biological organisms such as the patterns on sea shells and fern leaves, and even the dynamics of financial markets can all be described by fractals.

Mandelbrot set

So what is it that I do? – Part 3: Linear Optics Quantum Computing

In my previous post I discussed quantum computing and why it has the potential to be more powerful than classical computing. Until now, however, I've only discussed quantum computing in a very abstract sense. I've talked about quantum gates and qubits without giving any clue as to how these things are made in practice. Clearly this is pivotal, since otherwise quantum computing would be nothing more than a cool theory without much of a future. Once again I'll discuss the classical case as a starting point. In classical computers, as everyone is probably aware, bits are represented using electrical signals. For example, the presence of an electrical current could represent a logical '1' and its absence a logical '0'. Using electrical signals has the advantage that we can use electronic circuits consisting of transistors to implement gates. So what about the quantum case? It turns out that there are countless ways in which qubits can be represented and quantum gates constructed.

In my research I focus on one idea in particular, referred to as linear optics quantum computing, or LOQC. In LOQC we use photons, or particles of light, to represent qubits. Most of us will be familiar with the idea of polarization of light from high school experiments. For those not familiar with polarization, it can simply be thought of as the orientation of light, which must always be perpendicular to the direction in which the light travels. As a crude analogy, imagine a plane flying along which then rolls to one side. This would be changing the plane's polarization. Using the polarization of a photon we can very easily represent a qubit. We simply define that a photon polarized along, say, the horizontal axis represents a logical '0', and one polarized along the vertical axis a logical '1'. Naturally, the all-important superpositions are possible. For example, if a photon were polarized at 45 degrees to the horizontal, this would be an equal superposition of horizontal and vertical, or '0' and '1'.
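For the mathematically inclined, this encoding is easy to write down as state vectors. Here is a small sketch in Python (my own notation; numpy is only used for the arithmetic):

```python
import numpy as np

ket_H = np.array([1, 0], dtype=complex)  # horizontally polarized photon -> logical '0'
ket_V = np.array([0, 1], dtype=complex)  # vertically polarized photon   -> logical '1'

# A photon polarized at 45 degrees: an equal superposition of H and V.
ket_45 = (ket_H + ket_V) / np.sqrt(2)

# Measuring in the horizontal/vertical basis: the outcome probabilities
# are the squared magnitudes of the amplitudes.
print(np.abs(ket_45) ** 2)   # -> [0.5 0.5]
```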

So we have our qubits. What about constructing quantum gates? In my previous post I talked about the CNOT gate, a 2-qubit quantum gate. The picture below is a schematic for an implementation of this gate using LOQC. It isn't terribly important, or instructive, to understand exactly how it works. I really only want to draw your attention to two features. Firstly the black lines. These represent paths which the light (i.e. photons) take. Secondly, the horizontal bars. These are beamsplitters, and are nothing other than partially silvered mirrors which split a light beam into multiple weaker beams. The beamsplitters serve to interfere the photons in the circuit in a cleverly designed way such that the device implements a CNOT gate.

At this point, things are probably looking very bright for LOQC. Unfortunately, however, there are some pretty hideous problems with the gate I've just described. Most notably, the gate is non-deterministic, which simply means that it has some definite probability of failing. It's probably immediately obvious just what a problem this presents. If we only need to implement a single gate, and we have one which succeeds with, say, probability P, there are no worries. We simply repeat the gate over and over until it works. In practice, however, we'll want to build complicated circuits consisting of perhaps thousands of these gates. In this case the probability of the whole circuit working is P to the power of the number of gates in the circuit. In other words, we could be waiting a million years for the circuit to work. Fortunately, there are some extremely clever proposals on how this problem can be circumvented. Unfortunately, they are also quite complicated and experimentally very difficult.
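The scaling problem is easy to appreciate with a couple of numbers (these per-gate success probabilities are hypothetical; the actual figures depend on the particular scheme):

```python
# Chaining N non-deterministic gates: the whole circuit succeeds only
# if every single gate does, i.e. with probability P**N.
for p_gate in (0.99, 0.9, 0.5):
    for n_gates in (1, 100, 1000):
        print(f"P = {p_gate}, {n_gates} gates -> "
              f"circuit success probability {p_gate ** n_gates:.2e}")
```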

In conclusion, we know that in theory LOQC can work. In fact, the gate shown below has been experimentally demonstrated at the University of Queensland Quantum Technology Lab. In practice, however, there are many technological obstacles to overcome before any useful quantum computing algorithms can be implemented using LOQC.
