All posts by Peter Rohde

Quantum computer scientist, Mountaineer, Adventurer, Composer, Musician, Public Speaker, DJ

5 minutes of political fun

If you’ve got five minutes to spare, try the World’s Smallest Political Quiz, made by the well-known Libertarian organization, the Advocates for Self-Government. The quiz asks ten multiple-choice questions and gives you a graphical representation of how it rates your political persuasion, on a scale of Conservative, Statist, Liberal and Libertarian. Needless to say, as with all such quizzes, especially political ones, the results should be taken with a salt mine. Nonetheless, it’s a good way to get people interested in political issues and get the conversations rolling.

Here’s my result:
Political quiz result

Exploring the world

Recently some of my friends in the physics department drew my attention to one of the coolest programs I’ve ever seen. The program, World Wind, by NASA, allows you to interactively view the Earth. It connects to NASA’s servers, which host a massive, multi-terabyte database of satellite imagery that feeds into the program. The result is that you can hold the world in your hand, spin it around, and zoom in all the way down to an incredible 15 metre resolution. I should emphasize that the entire Earth is mapped in the database. In addition to the satellite imagery, the database includes topographical data, which the program can use to reconstruct elevation profiles. There are also facilities to superimpose city and suburb names, national borders, and even animations of weather patterns. Be warned, however: the program is big (around 250 MB) and you’ll need a pretty speedy internet connection to get data from the NASA servers (which have been down a lot of the time due to overload). Unfortunately, the program is only available for Windows.

Below is a snapshot from World Wind, taken looking over Mt. Cook on the South Island of New Zealand, where I’ll be climbing in a few weeks’ time. For this picture I enabled elevation profiling to reconstruct a 3D view. A pretty remarkable program, and completely free!
World Wind over Mt. Cook, New Zealand

The future of the US economy

This post was inspired by, and refers to, an article I read on CNN about the US Congress’ recent passing of a bill to increase the federal borrowing limit by $800 billion.

As one of his first actions since re-election this month, President Bush has increased the federal borrowing limit from $7.38 trillion to $8.18 trillion. The cap places a legal limit on how much debt the government can amass, and, as a result of Bush’s ongoing record deficits, which are rapidly approaching the half trillion dollar mark, the existing cap has been reached. This raises some very serious concerns for both the US economy and the world economy as a whole.

Ordinarily, running a significant deficit on a temporary basis would not raise too much concern. In fact, many economists argue that during periods of economic stagnation, which has recently been the case, this is a favourable approach, sometimes referred to as the Keynesian approach (after the famous economist John Maynard Keynes). This line of thought argues that increased government spending and tax reduction (i.e. deficits) act to stimulate the cycle of spending and consumption, thereby creating jobs and lifting productivity. In other words, large scale spending can serve to kick-start a stagnant economy. However, the present situation is a very different one. The massive spending engaged in by Bush does not represent one-off spending. Instead, it comes about as a result of fundamental structural changes which imply that deficits of this magnitude will be ongoing and, in fact, will probably increase further. Increased military spending complemented by ongoing military commitments in Iraq and Afghanistan, the problem of an ageing population, record high oil prices, and the intention to make permanent the income tax cuts offered by Bush in his first term all contribute to this expectation.

By my calculations, if deficits continue to be of the same magnitude as they are now, then in about two years’ time Congress will have to increase the borrowing limit again to avoid defaulting on its debt (which would have absolutely catastrophic consequences). Clearly we have a situation that is not sustainable. At $8.18 trillion, the federal borrowing limit represents about 70% of the GDP of the United States. If the government continues to adopt the band-aid solution of simply increasing the borrowing limit whenever it is reached, then eventually paying the interest on the debt will become untenable and large scale spending cuts and/or tax increases will be necessary. The approach presently adopted is akin to a credit card user hitting his credit limit and then, rather than paying off the debt, increasing the credit limit further to go on another shopping spree. It may work fine for a while, but eventually, and inevitably, the level of debt becomes unsustainable. Clearly we all hope the US economy will not reach such a stage. However, unfortunately this is a very real concern and one which would have very serious implications. From the Australian perspective, things would be very bleak indeed, since Australia’s economy is both highly dependent upon foreign direct investment, which comes, to a large extent, from the US, and foreign export markets, of which the US is one of our largest.
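For those who like to see the numbers, here is the rough back-of-the-envelope calculation behind that two-year figure, assuming the deficit stays somewhere near the half trillion dollar mark mentioned above (the exact figures are, of course, only approximate):

```python
# Rough estimate only: figures are approximations taken from the post above.
cap_increase = 800e9        # the $800 billion increase to the borrowing limit
annual_deficit = 450e9      # assumed yearly deficit, approaching half a trillion dollars

years_until_next_increase = cap_increase / annual_deficit
print(f"About {years_until_next_increase:.1f} years until the new cap is reached")
# About 1.8 years
```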

Mandelbrot sets

Today I was reading an interview in New Scientist with Benoit Mandelbrot, a famous mathematician and the founder of a branch of mathematics known as fractal geometry. Almost everybody will recognize a picture of the so-called Mandelbrot set (below), but probably very few are familiar with exactly what it is or how it comes about. In this post I hope to provide a layman’s description of how the Mandelbrot set is constructed. First, however, I must explain what a complex number is, something which some will be familiar with, but others not.

The normal numbers that we deal with every day are the so-called real numbers and can be represented on the number line, which ranges from negative infinity in one direction to positive infinity in the other. Real numbers suffice for most things that we’ll ever want to do with numbers; however, there are some limitations on what can be done with them. In particular, it had long bothered mathematicians that there was no solution to the simple question ‘what is the square root of -1?’. In fact, using real numbers there simply is no answer to this question (let me know if you think of one). In recognition that there is no numerical answer to this question, mathematicians decided to simply give it a label. And so the square root of -1 was labeled i. What’s the point, you ask? Well, the point is that by introducing this label we can proceed to solve problems which we otherwise would have thrown out the window. A complex number is simply a number which contains a real component, which we’ll call a, and some multiple of i, which we’ll call b, often referred to as the imaginary component. Technically, we can say that a complex number can be expressed in the form a+ib, where a and b are both real numbers. Unlike the real numbers, which can be represented on the number line, a complex number is 2-dimensional and must instead be expressed on a plane, called the complex plane, where the real component is one axis of the plane and the imaginary component the other. We can do with complex numbers all the sorts of things that we can do with ordinary numbers, like addition, subtraction, multiplication and division, all using standard algebraic rules.
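For the programmatically inclined, complex numbers are easy to play with, since many languages support them out of the box. Here’s a tiny sketch using Python, where the imaginary unit i is written 1j:

```python
a = 3 + 2j              # the complex number 3 + 2i
b = 1 - 4j              # the complex number 1 - 4i

print(a + b)            # addition:       (4-2j)
print(a * b)            # multiplication: (11-10j)
print(a.real, a.imag)   # the real and imaginary components: 3.0 2.0
print(abs(a))           # magnitude, i.e. distance from 0 in the complex plane: ~3.61
```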

The Mandelbrot set is constructed in the complex plane. We start by choosing a point on the complex plane (i.e. a complex number), which we label c. We introduce another complex number, z, which we initially set to zero. We then repeatedly apply the following simple rule: we let the new value of z be the square of the old value of z, plus c. After each iteration of applying this rule we calculate the magnitude of z, which is simply the distance from the zero-point on the complex plane (i.e. 0+0i) to z. We then compare the magnitude of z to some arbitrary threshold. We say that the depth of the complex number c into the set is how many times the rule can be applied before the magnitude of z exceeds the threshold. Finally we assign a colour or shade of gray to the point c on the complex plane according to its depth. This procedure is repeated for every point on the complex plane. So in summary, a picture of the Mandelbrot set is just a coloured representation of the depth of points on the complex plane into the set.
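If you’d like to generate your own pictures, here is a minimal Python sketch of the procedure just described. The threshold of 2 and the cut-off of 50 iterations are arbitrary choices for illustration, and the characters simply stand in for shades of gray:

```python
def mandelbrot_depth(c, threshold=2.0, max_iter=50):
    """Count how many times z -> z*z + c can be applied before |z| exceeds the threshold."""
    z = 0 + 0j
    for depth in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return depth
    return max_iter  # never escaped within max_iter iterations: treat as inside the set

shades = " .:-=+*#%@"  # darker characters for points deeper into the set

# Crude text rendering of the region -2 <= Re(c) <= 1, -1 <= Im(c) <= 1.
for row in range(21):
    line = ""
    for col in range(61):
        c = complex(-2 + 3 * col / 60, 1 - 2 * row / 20)
        depth = mandelbrot_depth(c)
        line += shades[min(depth * len(shades) // 50, len(shades) - 1)]
    print(line)
```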

Despite being described by such an incredibly simple rule, the Mandelbrot set is incredibly complicated. In fact, it is infinitely complicated. If we zoom in on a region of the Mandelbrot set, things don’t begin to flatten out or become regular. Instead the phenomenal beauty and complexity continues. We notice many familiar patterns emerging, most notably the ‘bug’ silhouette appearing over and over again. Nonetheless, the pattern is not repetitive. We can zoom in, quite literally forever, and never see any repetition or reduction in complexity. This is a generic property of fractals, of which the Mandelbrot set is one. It may seem that the Mandelbrot set, while very attractive, is fairly useless, which is most likely the case. However, this is not the case for fractals in general, which can be used to describe many real world phenomena. For example, geographic formations, the growth of biological organisms such as the patterns on sea shells and fern leaves, and even the dynamics of financial markets can all be described by fractals.

Mandelbrot set

So what is it that I do? – Part 3: Linear Optics Quantum Computing

In my previous post I discussed quantum computing and why it has the potential to be more powerful than classical computing. Until now, however, I’ve only discussed quantum computing in a very abstract sense. I’ve talked about quantum gates and qubits without giving any clue as to how these things are made in practice. Clearly this is pivotal, since otherwise quantum computing would be nothing more than a cool theory without much of a future. Once again I’ll discuss the classical case as a starting point. In classical computers, as everyone is probably aware, bits are represented using electrical signals. For example, the presence of an electrical current could represent a logical ‘1’ and its absence a logical ‘0’. Using electrical signals has the advantage that we can use electronic circuits consisting of transistors to implement gates. So what about the quantum case? It turns out that there are in fact countless ways in which qubits can be represented and quantum gates constructed.

In my research I focus on one idea in particular, referred to as linear optics quantum computing, or LOQC. In LOQC we use photons, or particles of light, to represent qubits. Most of us will be familiar with the idea of polarization of light from high school experiments. For those not familiar with polarization, it can simply be thought of as the orientation of light, which must always be perpendicular to the direction in which the light travels. As a crude analogy, imagine a plane flying along which then rolls to one side. This would be changing the plane’s polarization. Using the polarization of a photon we can very easily represent a qubit. We simply define that a photon polarized along, say, the horizontal axis represents a logical ‘0’, and one polarized along the vertical axis a logical ‘1’. Naturally, the all important superpositions are possible. For example, if a photon were polarized at 45 degrees to the horizontal, this would be an equal superposition of horizontal and vertical, or ‘0’ and ‘1’.
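To make this concrete, here’s a small, purely illustrative Python sketch of how a polarization angle maps onto the two amplitudes of the qubit (the weights of ‘0’ and ‘1’ in the superposition), using the convention above that horizontal means ‘0’ and vertical means ‘1’:

```python
import numpy as np

def polarization_qubit(angle_degrees):
    """Qubit amplitudes for a photon polarized at the given angle from horizontal."""
    theta = np.radians(angle_degrees)
    return np.array([np.cos(theta), np.sin(theta)])  # amplitudes of '0' and '1'

print(polarization_qubit(0))    # [1, 0]          -> logical '0'
print(polarization_qubit(90))   # [0, 1]          -> logical '1' (up to rounding)
print(polarization_qubit(45))   # [0.707, 0.707]  -> equal superposition of '0' and '1'
```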

So we have our qubits. What about constructing quantum gates? In my previous post I talked about the CNOT gate, a 2-qubit quantum gate. The picture below is a schematic for an implementation of this gate using LOQC. It isn’t terribly important, or instructive, to understand exactly how it works. I really only want to draw your attention to two features. Firstly, the black lines: these represent the paths which the light (i.e. the photons) takes. Secondly, the horizontal bars: these are beamsplitters, which are nothing other than partially silvered mirrors that split a light beam into multiple weaker beams. The beamsplitters serve to interfere the photons in the circuit in a cleverly designed way such that the device implements a CNOT gate.

At this point, things are probably looking very bright for LOQC. Unfortunately, however, there are some pretty hideous problems with the gate I’ve just described. Most notably, the gate is non-deterministic, which simply means that it has some definite probability of failing. It’s probably immediately obvious just what a problem this presents. If we only need to implement a single gate, and we have one which succeeds with, say, probability P, there are no worries. We simply repeat the gate over and over until it works. In practice, however, we’ll be wanting to make complicated circuits consisting of perhaps thousands of these gates. In this case the probability of the circuit working is P raised to the power of the number of gates in the circuit. In other words, we could be waiting a million years for the circuit to work. Fortunately, there are some extremely clever proposals on how this problem can be circumvented. Unfortunately, they are also quite complicated and experimentally very difficult.
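To get a feel for just how punishing that scaling is, here’s a quick sketch. The success probability of 1/9 is purely an illustrative figure, not a claim about the particular gate described above:

```python
# If each gate succeeds independently with probability P, a circuit of N such
# gates succeeds with probability P**N.  P = 1/9 is an illustrative value only.
P = 1 / 9

for n_gates in (1, 10, 100):
    print(f"{n_gates:3d} gates: success probability {P ** n_gates:.2e}")
# 1 gate   : ~1.1e-01
# 10 gates : ~2.9e-10
# 100 gates: ~3.8e-96
```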

In conclusion, we know that in theory LOQC can work. In fact, the gate shown below has been experimentally demonstrated at the University of Queensland Quantum Technology Lab. In practice, however, there are many technological obstacles to overcome before any useful quantum computing algorithms can be implemented using LOQC.

So what is it that I do? – Part 2: Quantum Computing

In my previous post I presented a few ideas from quantum mechanics. The next question to address is “how does this relate to computing?”. Let’s start by familiarizing ourselves with classical computing. A classical computer is just a normal, everyday style computer, like the one you’re looking at right now. Anyone who’s done any computer science or electronics will know that a computer employs binary logic, i.e. a language consisting of ‘0’ and ‘1’, and is constructed from gates. A gate is the most fundamental building block in a classical computer and performs very simple logical operations. For example, there is the so-called AND gate, which has two binary inputs and one binary output. The output of the gate is ‘1’ only if both inputs are ‘1’, otherwise the output is ‘0’. Similarly, there is the OR gate, which outputs a ‘1’ if either or both inputs are a ‘1’. There is also a single bit gate, called the NOT gate, which simply inverts whatever the input state was. In fact, these three gates form a so-called universal set of gates for classical computing, which means that any logical operation can be implemented using them. The PC on your desk is constructed from nothing other than a massive number of elementary gates like these, which jointly perform the much more complicated logical operations your computer is capable of.
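For anyone who’d like to see this in action, here’s a toy Python sketch of the three gates, plus a small demonstration of the universality claim: an XOR gate, which isn’t in the set, built purely out of AND, OR and NOT:

```python
def AND(a, b): return a & b      # outputs 1 only if both inputs are 1
def OR(a, b):  return a | b      # outputs 1 if either or both inputs are 1
def NOT(a):    return 1 - a      # inverts the input bit

# Universality in action: XOR built from nothing but AND, OR and NOT.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```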

So, that’s classical computers in a nutshell. What about quantum computers? Much in the same way that a classical computer uses bits, a quantum computer employs so-called qubits, or quantum bits. If you read my last post you’ll recall that in quantum mechanics, superpositions of different states are possible. The idea of superpositions applies to qubits as well, which means that, in addition to taking on values of ‘0’ and ‘1’, a qubit can take on superpositions of ‘0’ and ‘1’. In other words, a qubit can simultaneously be in some combination of these two states. If we have multiple qubits we can have superpositions of multiple qubit states. For example, with two qubits, we could have a superposition of all the states ‘00’, ‘01’, ‘10’ and ‘11’. In a classical computer we could only represent one of these states at any given time.
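One standard way of writing this down, and the convention I’ll assume in the little sketch below, is as a list of amplitudes (weights), one for each possible bit string. A single qubit then needs two numbers and a pair of qubits needs four:

```python
import numpy as np

# A single qubit: one amplitude each for '0' and '1'.
zero = np.array([1, 0])
one  = np.array([0, 1])
plus = (zero + one) / np.sqrt(2)   # an equal superposition of '0' and '1'

# Two qubits: one amplitude each for '00', '01', '10' and '11'.
equal_all = np.ones(4) / 2         # an equal superposition of all four states

print(plus)        # [0.707 0.707]
print(equal_all)   # [0.5 0.5 0.5 0.5]
```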

So what do we do with our wonderful qubits? In quantum computing we have something analogous to the classical gate. You guessed it: the quantum gate. Much like a classical gate, a quantum gate operates on one or more input qubits to produce output qubits. What makes quantum gates so powerful, however, is that if we input a superposition state we get more than just a binary sequence at the output. Instead we get a superposition state, where each element of the superposition is the action of the gate on the corresponding element of the input superposition. In short, the gate simultaneously operates on all components of a superposition state. This idea is probably best illustrated by example. We’ll consider the so-called controlled-NOT (or CNOT) gate. This gate has two input and two output qubits and performs the following logical operation: if the first qubit is ‘1’ the second qubit has its value flipped, otherwise it is left alone. So, if we input the state ‘00’ we get an output state of ‘00’, and if we input the state ‘10’ we get ‘11’. Now suppose that we input a superposition of these two states. In this case we get an output state containing a superposition of ‘00’ and ‘11’. Pretty cool, huh?
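Continuing with the amplitude picture from the previous sketch, the CNOT gate can be written as a 4×4 matrix, and its action on a superposition of ‘00’ and ‘10’ indeed produces a superposition of ‘00’ and ‘11’:

```python
import numpy as np

# Basis ordering: '00', '01', '10', '11'.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state_00 = np.array([1, 0, 0, 0])
state_10 = np.array([0, 0, 1, 0])
superposition = (state_00 + state_10) / np.sqrt(2)

print(CNOT @ superposition)   # [0.707 0 0 0.707] -> a superposition of '00' and '11'
```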

I’m sure it won’t take much to convince you that quantum circuits have the potential to be much more powerful than their classical counterparts, since we can input superposition states into a circuit and get an output state which contains a superposition of all the results, all in one go. Unfortunately there are some rather nasty caveats associated with qubits. Most notably, it actually isn’t possible to directly measure a superposition state and determine what all the components in the superposition are. So even though we might have some wonderful quantum circuit which gives us a superposition of answers to some problem, we can’t extract what the solutions are. This is clearly pretty nasty and limits the applications for quantum computers. Nonetheless, in some cases there are clever ways around this problem, which I won’t go into here. Rest assured, however, that despite the aforementioned problem, quantum computers still have the potential to solve problems which could otherwise never be solved on classical computers due to their computational complexity.

So what is it that I do? – Part 1: Quantum Mechanics

My work in physics focuses on quantum computing, one of those hot topics that the media love to talk about but nobody actually seems to know what it is. So providing a layman’s explanation of quantum computing, one that even I can understand, seems like a reasonable place to start. Over the next few posts I hope to provide something along the lines of Quantum Computing for Complete Twits, or perhaps even more rudimentary than that. I’ll start by trying to explain some of the basic ideas of quantum mechanics, the study of things which are very, very small, which is necessary if we’re going to understand the idea behind quantum computing.

In the world around us (not the TV show) we’re all used to seeing objects with precisely defined states. By states I mean things like position, velocity, whatever. Any old property will do. As it turns out, when we delve into the world of quantum mechanics this is no longer the case. For example, if you try and look at things like individual atoms, their states are no longer very well defined. Instead things are sort of fuzzy. If you try and measure the position of an atom, you just can’t get a precise answer as to where it is. This doesn’t present much of a conceptual problem to us, right? “Surely the fuzziness is just experimental error and noise?” we might say. Strangely, this isn’t the case at all. In fact the fuzziness is an intrinsic property of physics. That is to say, things actually are fuzzy, and no matter how high-tech our measurement devices are, we will never be able to overcome it. From here, I’m afraid, things only begin to become more strange. The idea of states not being well defined, or fuzzy, leads on to the next bizarre phenomenon of quantum mechanics, that of superpositions.

We’ve all grown up used to things being in one place or another. In quantum mechanics the rules are a bit looser than this, and in fact things can be in one place and another, called a superposition of the different states. This should be taken quite literally as meaning that something can simultaneously be in multiple places at once. How do we know this? Hopefully we all remember our high-school optics experiments where we shine a laser beam through a double slit and observe interference effects on the other side. Well, we can do this with single photons too. The problem is that the photon is a fundamental, indivisible unit, unlike a bright beam of laser light. When a single photon hits a double slit, it doesn’t become two photons. There’s still only one photon. Nonetheless, we can observe exactly the same kind of interference effect at the back end of the double slit as we do with laser light. This seems like a bit of a contradiction. If the individual photons are indivisible and always go one way or the other at the double slit then clearly there would be no interference pattern. We’d just see two bright spots, corresponding to the two slits. However, we do see interference patterns. The explanation is that the single photon enters a superposition of the two possible routes and self-interference occurs at the other side of the double slit. In other words, even though there is only one photon, it simultaneously takes both routes. Strange but true!
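For the numerically inclined, here’s a small, purely illustrative Python sketch of that argument. Each route through the slits contributes an amplitude; adding the amplitudes before squaring gives fringes, while the ‘one slit or the other’ picture, which adds probabilities, gives none. The phase values are made up for illustration rather than derived from any real slit geometry:

```python
import numpy as np

# Phase difference between the two paths, varying with position on the screen.
x = np.arange(-5, 6)                  # 11 sample points across the screen
phase = np.pi * x                     # illustrative only: not a real slit geometry

amp_slit1 = np.exp(1j * 0)            # amplitude for the path through slit 1
amp_slit2 = np.exp(1j * phase)        # amplitude for the path through slit 2

superposed   = np.abs(amp_slit1 + amp_slit2) ** 2               # add amplitudes, then square
one_or_other = np.abs(amp_slit1) ** 2 + np.abs(amp_slit2) ** 2  # add probabilities instead

print(superposed)     # oscillates between 0 and 4 -> interference fringes
print(one_or_other)   # constant 2 -> no fringes, just two overlapping bright patches
```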