Mandelbrot sets

Today I was reading an interview in New Scientist with Benoit Mandelbrot, a famous mathematician and the founder of a branch of mathematics known as fractal geometry. Almost everybody will recognize a picture of the so-called Mandelbrot set (below), but probably very few are familiar with exactly what it is or how it comes about. In this post I hope to provide a very layman's description of how the Mandelbrot set is constructed. First, however, I must explain what a complex number is, something some readers will be familiar with, but others not.

The normal numbers that we deal with every day are the so-called real numbers and can be represented on the number line, which ranges from negative infinity in one direction to positive infinity in the other. Real numbers suffice for most things we'll ever want to do with numbers; however, there are some limits to what can be done with them. In particular, it had long bothered mathematicians that there was no solution to the simple question 'what is the square root of -1?'. In fact, using real numbers there simply is no answer to this question (let me know if you think of one). In recognition that there is no real-valued answer, mathematicians decided to simply give it a label, and so the square root of -1 was labelled i. What's the point, you ask? Well, the point is that by introducing this label we can proceed to solve problems which we otherwise would have thrown out the window. A complex number is simply a number which contains a real component, which we'll call a, plus some multiple of i, where the multiplier is a real number we'll call b, often referred to as the imaginary component. Technically, we say that a complex number can be expressed in the form a+ib, where a and b are both real numbers. Unlike the real numbers, which can be represented on the number line, a complex number is 2-dimensional and must instead be represented on a plane, called the complex plane, where the real component is one axis and the imaginary component the other. We can do with complex numbers all the sorts of things we can do with ordinary numbers, like addition, subtraction, multiplication and division, all using standard algebraic rules.
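
If you'd like to play with these rules yourself, the little Python sketch below shows the usual arithmetic in action (Python writes the complex number a+ib as a + b*1j); it's just an illustration, nothing more.

    z1 = 3 + 4j        # a = 3, b = 4
    z2 = 1 - 2j        # a = 1, b = -2

    print(z1 + z2)     # (4+2j)  -- add the real and imaginary parts separately
    print(z1 * z2)     # (11-2j) -- ordinary algebra, remembering that i*i = -1
    print(abs(z1))     # 5.0     -- the distance from the origin of the complex plane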

The Mandelbrot set is constructed in the complex plane. We start by choosing a point on the complex plane (i.e. a complex number), which we label c. We introduce another complex number, z, which we initially set to zero. We then repeatedly apply the following simple rule: we let the new value of z be the square of the old value of z, plus c. After each iteration of this rule we calculate the magnitude of z, which is simply the distance on the complex plane from the zero-point (i.e. 0+0i) to z. We then compare the magnitude of z to some arbitrary threshold. The depth of the complex number c into the set is the number of times the rule can be applied before the magnitude of z exceeds the threshold. Finally we assign a colour or shade of gray to the point c on the complex plane according to its depth. This procedure is repeated for every point on the complex plane. So, in summary, a picture of the Mandelbrot set is just a coloured representation of the depth of points on the complex plane into the set.
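
For the programmers out there, here's a minimal sketch of that procedure in Python. The threshold of 2 and the cap of 100 iterations are just common, convenient choices, and the names (depth, max_iter) are mine, not anything official.

    def depth(c, max_iter=100, threshold=2.0):
        z = 0 + 0j
        for n in range(max_iter):
            z = z * z + c              # the rule: new z = old z squared, plus c
            if abs(z) > threshold:     # the magnitude of z has exceeded the threshold
                return n               # the depth of c into the set
        return max_iter                # never escaped, so we treat c as inside the set

    print(depth(0 + 0j))    # 100 -- the origin never escapes, so it sits inside the set
    print(depth(1 + 1j))    # 1   -- escapes almost immediately, so it's well outside

Colouring each point of the complex plane according to this depth value is all it takes to produce the familiar pictures.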

Despite being described by such an incredibly simple rule, the Mandelbrot set is incredibly complicated. In fact it is infinitely complicated. If we zoom in on a region of the Mandelbrot set things don't begin to flatten out or become regular. Instead the phenomenal beauty and complexity continues. We notice many familiar patterns emerging, most notably the 'bug' silhouette appearing over and over again. Nonetheless, the pattern is not repetitive. We can zoom in, quite literally forever, and never see any repetition or reduction in complexity. This is a generic property of fractals, of which the Mandelbrot set is one. It may seem that the Mandelbrot set, while very attractive, is fairly useless, and that's most likely the case. The same cannot be said of fractals in general, however, which can be used to describe many real-world phenomena. For example, geographic formations, the growth of biological organisms such as the patterns on sea shells and fern leaves, and even the dynamics of financial markets can all be described by fractals.

Mandelbrot set

So what is it that I do? – Part 3: Linear Optics Quantum Computing

In my previous post I discussed quantum computing and why it has the potential to be more powerful than classical computing. Until now, however, I've only discussed quantum computing in a very abstract sense. I've talked about quantum gates and qubits without giving any clue as to how these things are made in practice. Clearly this is pivotal, since otherwise quantum computing would be nothing more than a cool theory without much of a future. Once again I'll discuss the classical case as a starting point. In classical computers, as everyone is probably aware, bits are represented using electrical signals. For example, the presence of an electrical current could represent a logical '1' and its absence a logical '0'. Using electrical signals has the advantage that we can use electronic circuits consisting of transistors to implement gates. So what about the quantum case? It turns out that there are in fact countless ways in which qubits can be represented and quantum gates constructed.

In my research I focus on one idea in particular, referred to as linear optics quantum computing, or LOQC. In LOQC we use photons, or particles of light, to represent qubits. Most of us will be familiar with the idea of the polarization of light from high school experiments. For those not familiar with polarization, it can simply be thought of as the orientation of the light, which must always be perpendicular to the direction in which the light travels. As a crude analogy, imagine a plane flying along which then rolls to one side: this would be changing the plane's polarization. Using the polarization of a photon we can very easily represent a qubit. We simply define that a photon polarized along, say, the horizontal axis represents a logical '0', and one polarized along the vertical axis a logical '1'. Naturally, the all-important superpositions are possible. For example, if a photon were polarized at 45 degrees to the horizontal, this would be an equal superposition of horizontal and vertical, or '0' and '1'.
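
For the mathematically inclined, these states are usually written down as vectors of 'amplitudes'. The numpy sketch below is just one way of making that concrete; the names H and V are mine.

    import numpy as np

    H = np.array([1.0, 0.0])        # horizontal polarization -> logical '0'
    V = np.array([0.0, 1.0])        # vertical polarization   -> logical '1'

    diag = (H + V) / np.sqrt(2)     # polarized at 45 degrees: an equal superposition
    print(diag)                     # [0.7071 0.7071]
    print(diag ** 2)                # [0.5 0.5] -- equal weights of '0' and '1'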

So we have our qubits. What about constructing quantum gates? In my previous post I talked about the CNOT gate, a 2-qubit quantum gate. The picture below is a schematic for an implementation of this gate using LOQC. It isn't terribly important, or instructive, to understand exactly how it works. I really only want to draw your attention to two features. Firstly, the black lines: these represent the paths which the light (i.e. the photons) takes. Secondly, the horizontal bars: these are beamsplitters, which are nothing other than partially silvered mirrors that split a light beam into multiple weaker beams. The beamsplitters serve to interfere the photons in the circuit in a cleverly designed way such that the device implements a CNOT gate.
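
To give a rough feel for what a beamsplitter does mathematically, it mixes the amplitudes of the two paths entering it. The sketch below uses one common convention for a beamsplitter of reflectivity R; the actual CNOT schematic combines several of these with carefully chosen reflectivities, which I won't reproduce here.

    import numpy as np

    def beamsplitter(R):
        r, t = np.sqrt(R), np.sqrt(1 - R)
        return np.array([[t,  r],
                         [r, -t]])      # mixes the amplitudes of the two paths

    bs = beamsplitter(0.5)              # a 50/50 beamsplitter
    print(bs @ np.array([1.0, 0.0]))    # [0.7071 0.7071] -- one input beam split equally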

At this point in time, things are probably looking very bright for LOQC. Unfortunately, however, there are some pretty hideous problems with the gate I've just described. Most notably, the gate is non-deterministic, which simply means that it has some definite probability of failing. It's probably not immediately obvious just what a problem this presents. If we only need to implement a single gate, and we have one which succeeds with, say, probability P, there are no worries. We simply repeat the gate over and over until it works. In practice, however, we'll want to make complicated circuits consisting of perhaps thousands of these gates. In this case the probability of the whole circuit working is P raised to the power of the number of gates in the circuit. In other words, we could be waiting a million years for the circuit to work. Fortunately, there are some extremely clever proposals for how this problem can be circumvented. Unfortunately, they are also quite complicated and experimentally very difficult.
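
A quick back-of-envelope calculation shows how brutal this scaling is. The per-gate success probability below is purely illustrative, not a figure for any particular gate.

    P = 1 / 9                   # an illustrative per-gate success probability
    for n in (1, 10, 100):
        print(n, P ** n)        # ~0.11, ~2.9e-10, ~3.8e-96: hopeless for large circuits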

In conclusion, we know that in theory LOQC can work. In fact, the gate shown below has been experimentally demonstrated at the University of Queensland Quantum Technology Lab. In practice, however, there are many technological obstacles to overcome before any useful quantum computing algorithms can be implemented using LOQC.

So what is it that I do? – Part 2: Quantum Computing

In my previous post I presented a few ideas from quantum mechanics. The next question to address is "how does this relate to computing?". Let's start by familiarizing ourselves with classical computing. A classical computer is just a normal, everyday computer, like the one you're looking at right now. Anyone who's done any computer science or electronics will know that a computer employs binary logic, i.e. a language consisting of '0' and '1', and is constructed from gates. A gate is the most fundamental building block in a classical computer and performs very simple logical operations. For example, there is the so-called AND gate, which has two binary inputs and one binary output. The output of the gate is '1' only if both inputs are '1'; otherwise the output is '0'. Similarly, there is the OR gate, which outputs a '1' if either or both inputs are '1'. There is also a single-bit gate, called the NOT gate, which simply inverts whatever the input state was. In fact, these three gates form a so-called universal set of gates for classical computing, which means that any logical operation can be implemented using them. The PC on your desk is constructed from nothing other than a massive number of elementary gates like these, which jointly perform the much more complicated logical operations your computer is capable of.
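
For the curious, these gates are trivial to write down in code. The sketch below also shows the universality idea in action by building an XOR gate out of nothing but AND, OR and NOT.

    def AND(a, b):
        return a & b             # '1' only if both inputs are '1'

    def OR(a, b):
        return a | b             # '1' if either or both inputs are '1'

    def NOT(a):
        return 1 - a             # inverts the input bit

    def XOR(a, b):               # built entirely from the universal set above
        return AND(OR(a, b), NOT(AND(a, b)))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', AND(a, b), OR(a, b), XOR(a, b))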

So, that's classical computers in a nutshell. What about quantum computers? Much in the same way that a classical computer uses bits, a quantum computer employs so-called qubits, or quantum bits. If you read my last post you'll recall that in quantum mechanics, superpositions of different states are possible. The idea of superpositions applies to qubits as well, which means that, in addition to taking on values of '0' and '1', a qubit can take on superpositions of '0' and '1'. In other words, a qubit can simultaneously be in some combination of these two states. If we have multiple qubits we can have superpositions of multiple-qubit states. For example, with two qubits, we could have a superposition of all the states '00', '01', '10' and '11'. In a classical computer we could only represent one of these states at any given time.
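
One convenient way to picture this is to write a qubit as a vector of amplitudes, one entry per possible value. The numpy sketch below shows a single qubit in an equal superposition and the corresponding two-qubit state; the variable names are mine.

    import numpy as np

    zero = np.array([1.0, 0.0])              # the state '0'
    one  = np.array([0.0, 1.0])              # the state '1'
    plus = (zero + one) / np.sqrt(2)         # an equal superposition of '0' and '1'

    # Two such qubits: a 4-entry vector over the states '00', '01', '10', '11'.
    two_qubits = np.kron(plus, plus)
    print(two_qubits)                        # [0.5 0.5 0.5 0.5] -- all four states at once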

So what do we do with our wonderful qubits? In quantum computing we have something analogous to the classical gate. You guessed it: the quantum gate. Much like a classical gate, a quantum gate operates on one or more input qubits to produce output qubits. What makes quantum gates so powerful, however, is that if we input a superposition state we get more than just a binary sequence at the output. Instead we get a superposition state, where each element of the superposition is the action of the gate on the corresponding element of the input superposition. In short, the gate simultaneously operates on all components of a superposition state. This idea is probably best illustrated by example. We'll consider the so-called controlled-NOT (or CNOT) gate. This gate has two input and two output qubits and performs the following logical operation: if the first qubit is '1' the second qubit has its value flipped, otherwise it is left alone. So, if we input the state '00' we get an output state of '00', and if we input the state '10' we get '11'. Now suppose that we input a superposition of these two states. In this case we get an output state containing a superposition of '00' and '11'. Pretty cool, huh?
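
In the same vector picture, the CNOT gate is just a 4x4 matrix acting on the amplitudes of '00', '01', '10' and '11'. The sketch below reproduces the example above: a superposition of '00' and '10' goes to a superposition of '00' and '11'.

    import numpy as np

    CNOT = np.array([[1, 0, 0, 0],           # '00' -> '00'
                     [0, 1, 0, 0],           # '01' -> '01'
                     [0, 0, 0, 1],           # '10' -> '11'
                     [0, 0, 1, 0]])          # '11' -> '10'

    superpos = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)   # superposition of '00' and '10'
    print(CNOT @ superpos)                   # [0.7071 0 0 0.7071] -- '00' and '11'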

I'm sure it won't take much to convince you that quantum circuits have the potential to be much more powerful than their classical counterparts, since we can input superposition states into a circuit and get an output state which contains a superposition of all the results, all in one go. Unfortunately, there are some rather nasty caveats associated with qubits. Most notably, it actually isn't possible to directly measure a superposition state and determine what all the components in the superposition are. So even though we might have some wonderful quantum circuit which gives us a superposition of answers to some problem, we can't extract what all the solutions are. This is clearly pretty nasty and limits the applications for quantum computers. Nonetheless, in some cases there are clever ways around this problem, which I won't go into here. Rest assured, however, that despite the aforementioned problem, quantum computers still have the potential to solve problems which could otherwise never be solved on classical computers due to their computational complexity.
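
To see the measurement problem concretely: if we measure the state from the CNOT example above, we get a single random outcome, with probabilities given by the squared amplitudes, and the rest of the superposition is gone. A small sketch:

    import numpy as np

    state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # superposition of '00' and '11'
    labels = ['00', '01', '10', '11']
    probs = np.abs(state) ** 2                             # squared amplitudes

    print(np.random.choice(labels, p=probs))   # prints '00' or '11', 50/50 each time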

So what is it that I do? – Part 1: Quantum Mechanics

My work in physics focusses on quantum computing, one of those hot topics that the media love to talk about but nobody actually seems to understand. So providing a very layman's explanation of quantum computing, one that even I can understand, seems like a reasonable place to start. Over the next few posts I hope to provide something along the lines of Quantum Computing for Complete Twits, or perhaps even more rudimentary than that. I'll start by trying to explain some of the basic ideas of quantum mechanics, the study of things which are very, very small, which is necessary if we're going to understand the idea behind quantum computing.

In the world around us (not the TV show) we're all used to seeing objects with precisely defined states. By states I mean things like position, velocity, whatever. Any old property will do. As it turns out, when we delve into the world of quantum mechanics this is no longer the case. For example, if you try and look at things like individual atoms, their states are no longer very well defined. Instead things are sort of fuzzy. If you try and measure the position of an atom, you just can't get a precise answer as to where it is. This doesn't present much of a conceptual problem to us, right? "Surely the fuzziness is just experimental error and noise?" we might say. Strangely, this isn't the case at all. In fact the fuzziness is an intrinsic property of physics. That is to say, things actually are fuzzy, and no matter how high-tech our measurement devices are, we will never be able to overcome it. From here, I'm afraid, things only become more strange. The idea of states not being well defined, or fuzzy, leads on to the next bizarre phenomenon of quantum mechanics, that of superpositions.

We've all grown up used to things being in one place or another. In quantum mechanics the rules are a bit looser than this, and in fact things can be in one place and another, called a superposition of the different states. This should be taken quite literally as meaning that something can simultaneously be in multiple places at once. How do we know this? Hopefully we all remember our high-school optics experiments where we shine a laser beam through a double slit and observe interference effects on the other side. Well, we can do this with single photons too. The catch is that a photon is a fundamental, indivisible unit, unlike a bright beam of laser light. When a single photon hits a double slit, it doesn't become two photons. There's still only one photon. Nonetheless, we can observe exactly the same kind of interference effect at the back end of the double slit as we do with laser light. This seems like a bit of a contradiction. If the individual photons are indivisible and always go one way or the other at the double slit then clearly there would be no interference pattern. We'd just see two bright spots, corresponding to the two slits. However, we do see interference patterns. The explanation is that the single photon enters a superposition of the two possible routes and self-interference occurs at the other side of the double slit. In other words, even though there is only one photon, it simultaneously takes both routes. Strange but true!
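
For those who like numbers, the superposition explanation can be captured in one line of arithmetic: add the amplitudes for the two routes and then square, rather than squaring each route separately and adding. The path lengths below are made-up illustrative values.

    import numpy as np

    k = 2 * np.pi / 1.0                 # wavenumber for a wavelength of 1 (arbitrary units)
    r1, r2 = 100.0, 100.5               # path lengths differing by half a wavelength

    amp1, amp2 = np.exp(1j * k * r1), np.exp(1j * k * r2)

    print(abs(amp1 + amp2) ** 2)             # ~0: both routes in superposition interfere destructively
    print(abs(amp1) ** 2 + abs(amp2) ** 2)   # 2: 'one slit or the other' shows no interference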
