Googling it up

Many people these days use the well-known Google search engine on a daily basis for finding things on the World Wide Web. In recent months, however, Google has extended its search functionality to include much more than just old-fashioned web searches. In particular, Google has just launched Google Maps, which gives the user an interactive map of all of North America. Satellite imagery can be enabled (see picture below of lower Manhattan Island and Ground Zero), which gives an end result similar to NASA's WorldWind program, which I discussed in a previous post. In addition to imagery, the user can ask for directions from one place to another and be given a complete itinerary and map for the journey.

Another extremely useful search facility, tailored for academics, is Google Scholar, which allows for searching through journals, pre-print archives and many other sources.

Finally, something which I haven’t had the opportunity to try for myself, since I’m a mobile phone-less cave-dweller, is Google SMS. This allows for the user to perform searches for businesses, weather forecasts, movie times, product prices, and a multitude of other things using their mobile phones. One particularly novel use which I heard one of the Google co-founders discuss in a recent television interview was the ability to search for product prices within specified geographical constraints. For example, you could dial up the price of a product and be given a list of all the locations within half a mile where that product is sold and what the prices at the respective outlets are. Innovative uses of mobile search technology such as this could potentially have the power, in the long term, to completely revolutionize consumer behaviour and inject a whole new level of competitiveness into markets.

These new search technologies, in my opinion, are perhaps just the tip of the iceberg of what is to come. There are countless potential uses for search technology, particularly mobile search technology, and it will be very exciting to see what developments arise in the future.

The case for a national sales tax

In recent months there has been much talk in the United States of the possibility of introducing a National Sales Tax (NST) to replace the existing income based tax system (stories on Fox News, CNN Money). The proposal has been steadily gathering support in Congress and has the backing of Federal Reserve Chairman, Alan Greenspan (stories on Chicago Tribune, Fox News).

The proposal for a National Sales Tax would replace all forms of taxation, including income tax, capital gains tax, corporate taxes, and other duties and levies, with a single sales tax, or Goods and Services Tax (GST) for the Australians amongst us.

As a long-time advocate of a completely consumption based tax system here in Australia, I’ll present my own take on this issue and why I think it would be an extremely beneficial policy. The main points in favour of a NST, in my opinion, are as follows:

  • Increased investment and employment
    The present income based tax system has the following effects:

    • Investment is strongly discouraged through Capital Gains Tax (in fact investment is doubly taxed: first in the form of income tax and then again on the capital gains earned on the investment).
    • Employment and promotion are discouraged through Income Tax.
    • Entrepreneurship is discouraged through Income Tax and Corporate Tax.
    • Consumption is encouraged, since it is, on average, taxed at a much lower rate than income.

    A consumption based taxation system would have the converse effect. Namely, people would be encouraged to seek employment, to seek promotion while in employment, and to invest their money rather than spend it. This would have several consequences:

    • Employment levels would increase. With the disincentive to work eliminated, we could reasonably expect unemployment levels to decrease. This effect would be reinforced by the fact that with increased investment and a decreased tax burden, companies would have more capital available with which to take on employees.
    • Interest rates would be lower. With a higher percentage of people's disposable income going into savings, banks would have a greatly increased capacity to lend money, resulting in lower effective interest rates. An important point is that these decreases in interest rates would not have an inflationary effect, since they come about as a direct result of people's consumption abstinence.
    • Economic growth would increase. As a result of massively increased investment and decreased growth disincentive, economic productivity (i.e. GDP) would increase.
  • Elimination of bureaucracy and complexity
    The present tax system necessitates a massive bureaucracy to support its processing and collection. In the United States this costs almost $11b annually. Under a simplified consumption based tax system this bureaucracy and its associated maintenance costs would be slashed, representing a significant saving to the tax-payer.

    In addition to bureaucratic simplification, a NST would represent an enormous simplification to the individual. I’m sure anyone who has filled in a tax return, whether it be here in Australia, the US, or anywhere else, would concur on that note.

  • Less tax evasion
    A NST would largely eliminate tax evasion, which is rife under the present system. People often argue against sales taxes on the basis that they harm the poor and favour the rich. I would argue that in fact quite the opposite is true. It is an undeniable fact that the bulk of the income tax burden falls on the shoulders of the middle class, not the extremely rich. In fact, the very wealthy typically have the means to avoid income tax altogether through a variety of mechanisms which are not accessible to the working class. Under a consumption-based tax system this could, to a large extent, be mitigated, and everyone would pay tax in proportion to how much they consume.
  • Ideological reasons
    From an ideological point of view, I’m sure I’m not alone in being critical of how materialistic society has become. For this reason I believe a NST is a better alternative to the present system since it would encourage people away from living materialistic lifestyles, towards ones which place more emphasis on the importance of saving, investment and long-term financial planning.

Perhaps the most common criticism of consumption based tax systems is that they represent an increased burden on the poor. There are two rebuttals to this criticism:

  • Under the NST proposal a rebate on tax paid would be offered to those below the designated poverty threshold.
  • Unemployment levels would be reduced, resulting in many people at the lower end of the poverty scale being drawn out of poverty and into employment.
  • The effects of the NST would be partially offset by an increase in disposable income.

In summary, a National Sales Tax would be pro-growth, pro-employment and pro-investment, in stark contrast to the present system, which seems to discourage all the things we should be encouraging and encourage all the things we should be discouraging.


New paper: Frequency and temporal effects in linear optical quantum computing

My joint paper with Tim Ralph, entitled “Frequency and temporal effects in linear optical quantum computing” has just appeared in Physical Review A (e-print quant-ph/0411114).

In my previous post on linear optics quantum computing (LOQC) I discussed the notion of interfering photons, which is a fundamental requirement for LOQC. It turns out that for photons to interfere they must be indistinguishable. In quantum mechanics the term indistinguishable has a very strict interpretation: if we have two photons, there must be no way, in principle, for us to tell them apart. For example, suppose we had two identical photons arriving one after another. In this case, the different timing of the two photons allows us to know which is which. Thus, the photons are temporally distinguishable. Similarly, two photons might arrive simultaneously, but be spatially separated, in which case the photons are also distinguishable. Experimentally, making photons which are completely indistinguishable is extremely challenging and requires an enormous amount of precision, which is limited by technology. This problem of making indistinguishable photons is one of the most significant complications facing experimentalists.

In this paper we analysed the effect of photon distinguishability on the operation of LOQC circuits, in particular the controlled-NOT (CNOT) gate, which I described in an earlier post. From this analysis we were able to specify how distinguishable photons need to be for an experimental CNOT gate to work effectively. Understanding this is clearly very important from an experimental point of view, since it gives us an idea of how well we can reasonably expect our experimental gates to work.

The case for voluntary unionism

Today the government put the controversial "Higher Education Support Amendment (Abolition of Compulsory Up-front Student Union Fees) Bill 2005" before Federal Parliament, which would ban compulsory student unionism (CSU) throughout Australia. Over recent months this proposal has been subject to countless protests by activist student groups and much bad press. Consequently, as a staunch supporter of voluntary student unionism (VSU), I'll present the supporting case from my personal perspective.

From my point of view, there are four main supporting arguments in favour of VSU:

  • Freedom of association
    One of the fundamental tenets of Democracy is freedom of association. When a student is forced to belong to an organisation which they do not want to belong to, this is a fundamental violation of our personal freedoms. This criticism applies to any sort of forced affiliation, but particularly to student unions since they are not all-inclusive and not representative of the majority (refer to my next point).
  • Student unions are not representative
    Student unions in Australia typically pursue blatantly political agendas, usually including socialism, feminism, environmentalism, drug-law reform and countless other leftist ideologies, typically in extreme form. While people are certainly more than entitled to any or all of these points of view, the fact is that they do not represent a majority perspective. I should clarify that I am not suggesting that student unions instead start pursuing right-wing political agendas. Rather, they should pursue strictly apolitical agendas and focus on providing services which are available and useful to all. So long as student unions promote any ideology over another, they are not, and cannot be, the all-inclusive, representative bodies they should be. However, student unions are so universally plagued by the problem of partisanship, and the political affiliations so firmly entrenched, that most students, myself included, have no faith whatsoever that they will change of their own accord.
  • Students should not have to pay for services they do not use
    At the University of Queensland, where I study, I pay approximately $300 per year in Student Union fees, and I should reasonably be able to expect something in return. Unfortunately, rather than using their massive budget to provide services of benefit to everyone, the money is funnelled into countless causes which are of no benefit to the vast majority of students. Following are a few examples of ways in which Student Union money is spent ‘representing’ students at UQ:

    • The Women’s Collective, a small group of activist feminists, receive approximately $150,000 per year in support from the Student Union. This money is spent on activities such as sending members to surf and meditation camps on the New South Wales North Coast. Needless to say, ‘services’ like this are an extravagance and of no benefit to the average student.
    • The Queer Collective, a group open to gays and lesbians on campus, receives similar levels of funding and spends it in similar ways.
    • The Food and Wine Appreciation Society, a small group of people who regularly eat out at some of Brisbane's most expensive and elite restaurants, does so at the expense of the University of Queensland Student Union.
    • The High Society, my personal favourite, is a group of marijuana smokers whose stated goal is to promote drug-law reform, but which in fact organises for drug dealers from all across Brisbane to gather once a week on campus to sell to UQ students. One of their trading sessions was recently stormed by the police, but they still meet regularly and still actively deal drugs to students on campus. Needless to say, this 'essential service' is subsidised by the Student Union.

    In the meantime, facilities like the refectories, which are used at some stage by the vast majority of students, are not subsidised and run at a large profit. In light of this, arguments that CSU facilitates essential student services are not credible.

  • When membership is compulsory, student unions are run inefficiently
    In the absence of competitive forces, student unions have no incentive to run themselves efficiently. As an example, my purchases at the Union-owned refectories are approximately 20% more expensive than if I walk 10 minutes down the street to the nearest supermarket. The same applies to other Union-owned enterprises. Not only is the Union uncompetitive, but it actively seeks to stifle competition. In fact, the Union has regulations in place which forbid non-Union-owned enterprises from operating on Union premises, if they are in direct competition with a Union-owned enterprise.

In summary, the argument in favour of VSU is not one which ruthlessly opposes the existence of student services or student representation, as much of the media, and certainly the CSU supporters, have been making out. Rather, the argument for VSU is that it is in the interests of openness, transparency, competitiveness and personal liberty to allow every individual the right to choose for themselves what is best for them.

New paper: Non-deterministic approximation of photon number discriminating detectors using non-discriminating detectors

Quantum computing is one of those mysterious fields that most people don't have any familiarity with at all. This problem is exacerbated by the fact that the literature in this field is typically very technical and out of reach to non-specialists. In fact, even I, as a student in the field, find it extremely difficult to follow the literature. Consequently, I've decided that whenever I publish a new paper, I'll complement it with a layman's description of the paper in this blog, hopefully in a form palatable to those who have never even heard of the word physics before.

Several months ago my first physics paper, entitled "Non-deterministic approximation of photon number discriminating detectors using non-discriminating detectors" (e-print quant-ph/0411114), was accepted for publication in the Journal of Optics B. This is quite a mouthful, I confess, but the idea behind the paper is very simple and I'll now attempt to explain it in as non-technical a manner as possible.

In optical quantum computing (see my previous post for an introduction), my area of research, we use photons, or particles of light, to represent qubits. One of the most fundamental requirements to experimentally realize optical quantum gates is photo-detectors. A photo-detector is simply a device which tells you when a photon hits it. However, there are two types of photo-detectors: discriminating (or number-resolving) and non-discriminating. A discriminating photo-detector is one which is able to tell you how many photons have hit it, whereas a non-discriminating detector can only tell you if one or more photons hit it, but not how many. In the long term, if we're going to be able to construct optical quantum computers, it turns out that we'll need the former discriminating variety. However, building discriminating photo-detectors is very technologically challenging and presently we only have reliable non-discriminating ones. In this paper I describe how non-discriminating detectors can be used to approximate discriminating detectors. The proposal is non-deterministic, which simply means that the device doesn't always work, but when it does, you can be very confident that it has worked correctly. Because the proposal is non-deterministic, its application is quite limited. However, there are still many experiments for which the proposal may be applicable.

My proposal describes a class of detectors which do the following. We have some unknown number of photons n, hitting our detector, and we want to know if the number of incident photons is m. If n=m our detector gives a response, otherwise it does not.

I'll begin by explaining the simplest case, an m=1 detector. That is, a detector which tells us if there is exactly one incident photon. There are only two basic components we need to do this: a beamsplitter and two non-discriminating detectors. A beamsplitter is simply a device which, when a photon hits it, sends it one way with some probability P and the other way with probability 1-P. To construct our m=1 detector we will simply direct the incoming beam onto a very low reflectivity beamsplitter (i.e. very small P). At each of the beamsplitter outputs we place a non-discriminating detector. Now imagine that n=1, i.e. a single photon is incident on the beamsplitter. The photon has probability P of hitting the first detector, and probability 1-P of hitting the second detector. If we have two incident photons (i.e. n=2) then the probability that both photons reach the first detector is P squared, and the rest of the time at least one of the photons will reach the second detector. In general, for an arbitrary n-photon input state, the probability that all photons reach the first detector drops off exponentially as P to the power of n. Recall that P is chosen to be very small. Therefore, P squared is much smaller than P. This realization forms the basis of the proposal. We apply a so-called post-selection procedure. Namely, if the first photo-detector triggers, but the second does not, we can say with high probability that exactly one photon was incident on the beamsplitter, since the probability that multiple photons are all directed to the first detector is extremely low. If the second detector does trigger, we ignore the result as a failure. This is where the non-determinism comes in. So we see that using a low-reflectivity beamsplitter and two non-discriminating detectors we can construct a device which fails most of the time, but when it succeeds, tells us that we almost certainly had exactly one photon at the input.
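The post-selection argument above can be checked with a few lines of arithmetic. This is my own illustrative sketch (not code from the paper), ignoring detector inefficiency:

```python
# Illustrative sketch (not from the paper): probability that all n
# incident photons are reflected to the first detector by a
# beamsplitter of reflectivity P. A detector-1-only click is then
# strong evidence that n = 1, since P**n collapses rapidly with n.

def all_photons_reflected(n: int, p: float) -> float:
    """Probability that all n photons reach the first detector: P**n."""
    return p ** n

p = 0.01  # very low reflectivity
for n in (1, 2, 3):
    print(f"n={n}: {all_photons_reflected(n, p):.2e}")
```

With P = 0.01, a two-photon input sends both photons to the first detector only about once in ten thousand trials, which is why a detector-1-only click is such strong evidence for a single photon.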

The idea I just described for an m=1 detector can be generalized to construct arbitrary m-photon discriminating detectors, by using a chain of m beamsplitters instead of just one. In principle the idea works very well, however there is one experimental challenge in particular which limits its effectiveness. Specifically, experimental photo-detectors are inefficient, which means they don't always click when a photon hits them. Let's consider what this means in the context of the m=1 detector I described earlier. We said that when the first detector clicks, but the second does not, we can be fairly sure that we had exactly one photon in our input. Suppose, however, that the detectors are inefficient. Then a photon might hit the second detector without triggering it, meaning we may in fact have had two incident photons but simply missed one of them. Clearly this makes the results erroneous. This problem of having inefficient detectors is actually quite significant since present-day detectors are really quite inefficient. Nonetheless, with the future in mind, when highly efficient detectors may be available, the proposal may find a use.
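To make the inefficiency problem concrete, here is a small back-of-the-envelope estimate (my own illustration, not a calculation from the paper) of how often a two-photon input is wrongly accepted as a single photon when each detector fires with efficiency eta:

```python
# Illustrative estimate (not from the paper). A false acceptance of a
# two-photon input requires: exactly one photon reflected to detector 1
# (probability 2*P*(1-P)), that photon actually triggering detector 1
# (probability eta), and the transmitted photon being missed by
# detector 2 (probability 1 - eta).

def false_accept_two_photons(p: float, eta: float) -> float:
    one_reflected = 2 * p * (1 - p)  # which of the two photons reflects
    det1_clicks = eta                # reflected photon triggers detector 1
    det2_misses = 1 - eta            # transmitted photon goes undetected
    return one_reflected * det1_clicks * det2_misses

# A perfect detector (eta = 1) never produces this error;
# a realistic, inefficient one does.
print(false_accept_two_photons(0.01, 1.0))  # 0.0
print(false_accept_two_photons(0.01, 0.7))
```

The error vanishes for perfectly efficient detectors, which is why the proposal becomes more attractive as detector technology improves.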

Throat singing adventures

About two years ago I became interested in 'throat singing', an unusual form of singing, traditionally performed in Tuva, Tibet, Mongolia and several other places. Throat singing differs from conventional singing in that the singer controls the overtones in the voice rather than the fundamental, which is done by varying the shape of the mouth and position of the tongue. The voice box itself is only ever producing a single sound, a low-pitched drone. The cavity of the mouth then acts as a bandpass filter, which is selectively tuned to different frequencies emanating from the voice box. In order for this to work effectively the voice box must produce a rich spectral profile, i.e. have a significant component of overtones, otherwise there won't be anything to tune into. This is achieved by droning with a very constricted throat, which makes a gruff, and therefore spectrally rich, tone.
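As a rough illustration of the physics (my own sketch, with an arbitrary example drone pitch): the overtones available to the singer sit at integer multiples of the drone's fundamental, and the mouth cavity's bandpass filter picks one of them out:

```python
# Rough sketch of the harmonic series above a low drone. A throat
# singer's mouth cavity selectively amplifies one of these overtones
# while the voice box holds the fundamental fixed.

def overtones(fundamental_hz: float, count: int) -> list[float]:
    """The first `count` overtones, i.e. harmonics 2 through count+1."""
    return [fundamental_hz * k for k in range(2, count + 2)]

# For a 100 Hz drone, the overtones start at 200, 300, 400, ... Hz.
print(overtones(100.0, 5))
```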

Naturally, I was curious to try it for myself, so I tracked down some instructions and started practising. Although experienced throat singers can produce incredible harmonics using just their mouth, I have found that the effects can be enhanced enormously by practising in the right places. In particular, bathrooms, concrete stairwells, underground carparks and caves have particularly good resonance, and without much practice an amateur like myself can soon have an entire room ringing and buzzing using just their voice. It's a very exhilarating experience. Even within a particular environment, like a bathroom, different locations can have completely different resonant characteristics. For example, I've found that under the shower I get particularly good resonance when I stand directly over the drain pipe, facing down. At the right pitches this sets up a standing wave in the pipe, much like inside a flute or other woodwind instrument. Also very effective is facing into the corner of the room.

The moral of the story is that if you're ever walking down the stairs and the whole thing starts sounding like someone rang a church bell, don't be too concerned, it's probably just some innocent throat singer wannabe, like myself, trying to squeeze in a bit of practice.

If you're interested in hearing what throat singing sounds like, there are heaps of free MP3s available for download. Also, Scientific American have an interesting article on Tuvan throat singing.

Mountaineering in New Zealand

I just returned from my long anticipated mountaineering trip to New Zealand, where I spent four weeks tramping and climbing in the Mt. Cook area. I initially took a Technical Mountaineering Course (TMC) with Alpine Guides, an excellent 10-day course on mountaineering technique, which included nights spent in snow caves, falling into crevasses (not intentionally I might add), climbing peaks and putting on several kilos from eating so much. If you're interested in getting into the highly addictive sport of mountaineering, I can highly recommend this course. After the TMC I hired a guide for a week to attempt New Zealand's highest peak, Mt. Cook (3754m). Unfortunately the expedition was unsuccessful due to unsafe snow conditions. By moonlight and headtorch we climbed about half-way up the mountain, via the infamous Linda Glacier, only to find ourselves on extremely high-risk avalanche terrain, from where we could not safely proceed. Bad luck I know, but that's all part and parcel of mountaineering. We also spent a couple of days rock-climbing in the Lake Wanaka area, which was also great fun. As much fun as mountaineering is, it's about as hard on the pocket as recreation can get. So before you take it up, ask yourself if you're really willing to be perpetually broke. Personally, I wonder whether I should have taken up knitting instead. I know it would have made my mother happier.
Ice climbing

More on Moore’s Law

Most people will have heard of Moore's Law: that the number of transistors in microprocessors doubles every two years. For example, in 1971, Intel's original microprocessor, the 4004, had a few thousand transistors. Today, state-of-the-art processors have on the order of half a billion transistors. While these statistics are certainly very interesting, there are some other laws regarding the development of semiconductor technology that most people are probably not so familiar with.
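As a quick sanity check on those figures (a back-of-the-envelope sketch, assuming roughly 2,300 transistors in the 4004 and a clean two-year doubling period):

```python
# Back-of-the-envelope Moore's Law extrapolation. Assumes the 4004's
# roughly 2,300 transistors (1971) and a doubling every two years.

def moores_law(start_count: float, start_year: int, year: int,
               doubling_years: float = 2.0) -> float:
    """Transistor count predicted for `year` by exponential doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# By 2005 the rule predicts a few hundred million transistors,
# the same order of magnitude as the half-billion quoted above.
print(f"{moores_law(2300, 1971, 2005):.2e}")
```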

The semiconductor fabrication facilities where our Pentiums, Athlons and PowerPCs are manufactured are, as you can probably imagine, not especially cheap. In fact, the cost of fabrication plants is increasing exponentially, much like Moore's Law for the number of transistors on a chip. In 1970 it cost about $6 million to build a state-of-the-art fabrication plant, today it costs a few billion, and by 2010 it is estimated to be on the order of $18 billion. That's a lot of money for just one fabrication plant. Even today, the costs are so extreme that very few companies have the capital to build such plants. I won't try to predict what implications this will have for the future of semiconductor technology, but it's certainly an amazing, if not alarming, trend.
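The implied growth rate is easy to back out from those figures (a sketch assuming smooth exponential growth between the quoted data points):

```python
import math

# Sketch: implied doubling time for fab cost, from $6 million in 1970
# to an estimated $18 billion in 2010 (a factor of 3000 in 40 years).

def doubling_time(cost_start: float, cost_end: float, years: float) -> float:
    """Years per doubling under smooth exponential growth."""
    return years * math.log(2) / math.log(cost_end / cost_start)

# On these numbers, fab costs double roughly every three and a half years.
print(round(doubling_time(6e6, 18e9, 40), 1))
```

So fab costs have been doubling faster than transistor counts, which is exactly why so few companies can afford to stay in the game.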

Manufacturing transistors, however, is just one half of the process. The other half is actually designing the circuits that go onto chips such that all of those hundreds of millions of transistors do something useful. To do this, today's digital systems designers employ very high-tech software which automates most of the development cycle. The capacity of these tools to design large integrated circuits is also increasing exponentially. By this I mean the number of transistors which they can design into a circuit. Despite growing exponentially, however, the capacity of software tools is growing at a smaller exponential rate than the number of transistors that fit onto a chip. In other words, software design tools are not keeping up with what the fabrication plants are capable of. What this means is that although we have an enormous capacity for squeezing transistors onto chips, we keep falling further and further behind in trying to design useful circuits that fully make use of so many transistors. Fortunately that's not the end of the story. Instead of putting transistors to waste, systems designers are overcoming this problem by changing their design paradigms. For example, we are beginning to see the emergence of multi-core microprocessors, which means that the designer puts multiple smaller microprocessors into a 'chip' rather than designing a single larger one. By doing this, the designer isn't as limited by software and can still make full use of the available transistors. Trends like this are likely to represent the future of microprocessors and it probably won't be long before all of our PCs have many processor cores under the hood.

Sources: Intel Corp., IC Knowledge, seminar by Prof. Jan Rabaey (2003)

Quantum crypto-anarchist