Several days ago NZ PM Jacinda Ardern released her now-viral 2-minute policy achievement challenge.
I have prepared a response on behalf of Australia.
With my PhD student Madhav Krishnan Vijayan and my old PhD colleague Austin Lund.
Full paper available at https://arxiv.org/abs/1910.03093
Error-detection and correction are necessary prerequisites for any scalable quantum computing architecture. Given the inevitability of unwanted physical noise in quantum systems and the propensity for errors to spread as computations proceed, computational outcomes can become substantially corrupted. This observation applies regardless of the choice of physical implementation. In the context of photonic quantum information processing, there has recently been much interest in passive linear optics quantum computing, which includes boson-sampling, as this model eliminates the highly-challenging requirements for feed-forward via fast, active control. That is, these systems are passive by definition. In usual scenarios, error detection and correction techniques are inherently active, making them incompatible with this model, arousing suspicion that physical error processes may be an insurmountable obstacle. Here we explore a photonic error-detection technique, based on W-state encoding of photonic qubits, which is entirely passive, based on post-selection, and compatible with these near-term photonic architectures of interest. We show that this W-state redundant encoding technique enables the suppression of dephasing noise on photonic qubits via simple fan-out style operations, implemented by optical Fourier transform networks, which can be readily realised today. The protocol effectively maps dephasing noise into heralding failures, with zero failure probability in the ideal no-noise limit.
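As a rough numerical illustration (not the full protocol of the paper), the optical Fourier transform networks mentioned in the abstract implement the N-mode unitary with entries U[j, k] = exp(2*pi*i*j*k/N)/sqrt(N). Being unitary, it can be realised as a passive linear-optics network, and it fans a single photon out uniformly across all output modes:

```python
import numpy as np

def fourier_unitary(n):
    """n-mode discrete Fourier transform: U[j, k] = exp(2*pi*1j*j*k/n) / sqrt(n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

U = fourier_unitary(4)

# Unitarity: U U^dagger = I, so U is implementable as a passive network.
assert np.allclose(U @ U.conj().T, np.eye(4))

# Fan-out: a single photon entering mode 0 exits each mode with equal probability.
amps = U @ np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(np.abs(amps) ** 2, 1 / 4)
```

This only sketches the fan-out step; the full W-state encoding and heralding procedure is in the linked paper.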
Meet Peter Rohde, an Australian Research Council Future Fellow in the Centre for Quantum Software & Information at the University of Technology, Sydney. His theoretical proposals have inspired several world-leading experimental efforts in optical quantum information processing.
As a collaborator in China’s world-first quantum satellite program, he aided the design of quantum protocols for space-based demonstration. Rohde has worked at highly acclaimed institutes such as the University of Oxford and Institute for Molecular Biosciences, with over 60 publications and 1,500+ citations in quantum optics, quantum information theory, ecology, and politics.
Learn more about the world of quantum computing through the eyes of Peter Rohde. Grab your tix.
Given the rate of information flow in the social media generation, and the ability for information to go internationally viral within minutes — requiring nothing more than thoughtless button-clicking within a few degrees of separation — it’s undeniable that the propagation of Fake News poses a major threat. Whether it be malicious electoral interference, or the perpetuation of nonsensical views on medicine leading to the re-emergence of deadly but entirely preventable diseases, the implications are catastrophic. Indeed, they already have been, and they pose a major threat to humanity.
For this reason it’s understandable that people want to put an end to it. Of course we don’t want measles (or Trump). But how do we achieve this? Many politicians around the world are pressuring social media giants to filter content to eliminate fake news, while others are advocating legislation to force them to.
I oppose such approaches outright, and believe they pave the way for even greater thought manipulation. (Interpret the terminology ‘fake news prevention’ as synonymous with ‘terrorists, drugs and pedophiles’, as per my last article.)
What constitutes fake news, anyway? Even upon reading articles about the same event, as portrayed by two ideologically distinct yet well-respected mainstream newspapers, the tilt can be astronomical, with each side criticising the other for bias and corruption. The notion of fakeness is hardly an objective one. When it comes to statements made by politicians it’s even more perverse.
There is no such thing as an unbiased media source, nor will any story we read have full access to all information, or the full background context, or be 100% verifiably correct. Essentially what propagates over the internet is a close approximation to white-noise. Applying the appropriate filter, you can extract any signal you want.
Any kind of enforcement of filtering or information suppression implies certain types of information being removed at the behest of those with the ability to do so. Those people are necessarily in positions of power and influence, and will pursue their own interests over the collective one. The ability to impose filtering, enables post-selection bias by mandate. In conjunction with the false sense of security that a filtering system creates, the outcome is even greater vulnerability to the self-reinforcement and confirmation biases we seek to avoid.
The implications of the ability of those in power to manipulate this to their advantage are obvious, and they are the basis upon which totalitarian societies are built. Already in Singapore there have been deep concerns surrounding this, where anti-fake news legislation requires organisations to,
“carry corrections or remove content the government considers to be false, with penalties for perpetrators including prison terms of up to 10 years or fines up to S$1m.”
The term “the government considers to be false” is an illuminating one.
Once a mandate for filtering is established, its application cannot be confined to what is ‘fake’, nor can we trust those making that determination to wield this extraordinary power. With such a mandate in place, the parameters defining its implementation will evolve with the political agenda, likely via regulation rather than legislation — isolating it entirely from any democratic oversight or debate. Regardless of who is at the helm, be sure that it will be used to undermine those who are not. History substantiates this — it is why we hold them to account, rather than blindly trust them to do what is right.
Instead of relying on those with vested interests to take on fake news, we must arm ourselves to do it in their absence. We must do so in a manner that is collective, transparent, decentralised, and robust against malign political interference (i.e. all political interference).
By far the most powerful avenue towards combating fake news is to equip people with the skills to do it themselves. For this reason, the following should be taught to all, from the earliest possible age, as essential components of our education system:
Already Finland has reportedly had great success in pursuing precisely this approach at the school level, with similar discussions emerging in the UK and within the OECD. Finland’s approach (Nb: I don’t know the details of the curriculum), is foresighted and correct.
Spotting fakeness at a glance can be challenging, and even the most mindful social media users will routinely fall for things, making software tools indispensable to a robust process. Certainly, modern analytical techniques could be employed for this purpose to reveal the reliability of information sources, usually with a high degree of accuracy. When it comes to social media giants applying fake news filters, this is inevitably the route that will be taken. It can’t possibly be done by hand.
If the purpose of such software tools is to make us aware of misleading information, then their manipulation provides an even more powerful avenue for misleading us than the underlying information itself, owing to the false sense of security they create, and our subsequent subconscious loosening of internal filtering standards.
To illustrate this, the existing social media giants, Facebook and Twitter, are already routinely accused of implementing their anti-hate-speech policies in a highly inconsistent and asymmetric manner. Everyone will have their own views on this, but from my own observations I agree with this assessment. Note that selectively preventing hate speech from one side, whilst not doing so for the other, is an implicit endorsement of the latter, tantamount to direct political support. This type of political support — the ability to freely communicate, while one’s opponents are denied the same — is the single greatest political asset one can have. The ability to platform and de-platform entire organisations or ideologies is the most politically powerful position one can hold — it’s no coincidence that the first step taken in the formation of any totalitarian state is centralised control of the media.
This implies that any tools we rely on for this purpose must be extremely open, transparent, understandable, and robust against intentional manipulation. In the same way that you would not employ a proprietary cryptographic algorithm for encrypting sensitive data, with no knowledge of its internal functioning, the same standard of trust must be applied when interpreting the reliability of information, let alone outright filtering.
Simultaneously, these tools must be allowed to evolve and compete. If they are written behind closed doors by governments or by corporations, none of these criteria will be met. The tools cannot fall under any kind of political control, and must be decentralised and independent.
Tools based on community evaluation and consensus should be treated with caution, given their vulnerability to self-reinforcement via positive feedback loops of their own — a new echo-chamber. Indeed, this vulnerability is precisely the one that fake news exploits to go viral in the first place.
Identifying unreliable information sources is something that modern machine learning techniques are extremely well-suited to, and if implemented properly, would likely be our most useful tool in fact-checking and fake news identification. However, these techniques are inherently at odds with my advocacy for algorithmic transparency.
In machine learning, by definition, we don’t hard-code software to spot certain features. Rather we train it using sample data, allowing it to uncover underlying relationships and correlations for itself. A well-trained system is then in principle able to operate upon new data it hadn’t previously been exposed to, and identify similar features and patterns. The problem is that what the system has learned to see is not represented in human-readable form, nor even comprehensible to us, given its mathematical sophistication. If the original training data were to be manipulated, the system could easily be coaxed into intentionally exhibiting the biases of its trainers, which would be extremely difficult to identify by outsiders.
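To make the distinction concrete, here is a minimal sketch (invented toy data, plain gradient-descent logistic regression) of how manipulated training data bakes a trainer’s bias into learned weights that are not human-readable rules:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: feature 0 genuinely predicts unreliability;
# feature 1 is irrelevant (say, a marker of one political leaning).
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# A trainer with an agenda relabels posts exhibiting the irrelevant marker.
y_biased = np.where(X[:, 1] > 1.0, 1.0, y)

def train(X, y, epochs=200, lr=0.1):
    """Logistic regression by gradient descent: the output is weights, not rules."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

w_fair = train(X, y)
w_biased = train(X, y_biased)
# The biased model assigns genuine weight to the irrelevant feature, yet
# nothing in the bare numbers reveals that the training data was manipulated.
```

An outside auditor inspecting only `w_biased` sees two numbers, not the relabelling that produced them — which is the opacity problem described above.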
I don’t advocate against the use of machine learning techniques at all. However I very much advocate for recognising their incompatibility with the desire for complete transparency and openness, and the recognition that this establishes a direct avenue for manipulation.
The biggest obstacle of all to sifting fact from fiction is our own complacency, and our desire to even do so. Given that in just minutes a Facebook or Twitter user can scroll through hundreds of posts, if establishing the reliability of a source requires opening multiple new browser windows to cross-check and research each one individually, it will undermine the user experience — the average user (especially those most vulnerable to influence by fake news) will not bother.
The tools we develop for verifying reliability must treat this as the most important design consideration, providing a fully integrated and user-friendly mechanism which does not detract from the inherently addictive, slot-machine-like appeal of the social media experience. If the tools detract from the user experience, they will be rejected and become ineffective at mass scale.
What interest does the State have in preventing Fake News? None; this is how they subsist. What they actually desire is to selectively eliminate information which works against their interests.
In the presence of overwhelming white-noise, selective elimination is just as powerful as the creation of new misinformation.
Providing them with a mandate to restrict the information we are able to see (in the ‘public interest’, no less) is to grant them the right to conduct the 21st-century equivalent of 1940s book-burning ceremonies. Needless to say, having established a mandate to hold the ceremonies, they will decide for themselves which books get burnt.
Rather than burn our books on our behalf, let us decide which ones we would like to read, but let us also develop trustworthy, reliable, and accessible tools for making that determination for ourselves. Admittedly, much of society is highly fallible and unreliable when it comes to making such self-determination. To those in positions of power this applies even more so, given that they necessarily have interests to pursue, and seek a centralised approach for that reason.
There is an important relationship between free people and those in power that must be maintained, whereby our freedoms will only be upheld if accountability is enforced. The latter is our responsibility, not theirs. To delegate the accountability process — of which the free-flow of information is the single most pivotal — to those being held to account, is to capitulate entirely, and voluntarily acquiesce to subservience via population control.
Most of the world’s internet users feel little need to rely on encryption, beyond when it is completely obvious and implemented by default (e.g. when performing online banking). But when it comes to personal communications, where traffic interception by the State is highly likely in some jurisdictions and an outright certainty in others, the average user takes the “I have nothing to hide” attitude.
Assume by default, whether you live in a ‘democracy’ or under overt fascism, that the State intercepts everything. Constitutional rights are a facade. They are not enforced by the State; they exist to hold the State to account. The onus for enforcing them lies upon us.
In today’s world, where advanced machine learning technology is freely available to all, and implementable by those with even the most elementary technical expertise, this attitude is naive at best and wilfully negligent at worst. It rests on the outdated notion that all the State has to gain from unencrypted communications, and the identities of those involved, is some obscure and seemingly unimportant piece of information (who cares what I had for breakfast, you say?). This is a completely forgivable, but fundamental, misunderstanding of not just what information can be directly extracted, but of what can be inferred from it.
What can be inferred about you, indirectly allows things to be inferred about others, and others...
We live in an era of incredibly advanced analytical techniques, backed by astronomical computing resources (in which the State has the monopoly), where based on nth degrees of separation, and the extraction of cross-correlations between who they are and what they say, anything and everything we say directly compromises the integrity of the rest of society.
The information that can be exploited includes, but is by no means limited to:
All of these things can subsequently be correlated with all other information held about you, and those you engage with, and similarly for them, extended onwards to arbitrary degree.
The simplest example of how this type of technology works is one we are all familiar with — online recommendation systems (including purchasing suggestions and social media friend recommendations). By correlating the things you have previously expressed interest in (e.g. via online purchases, or existing friendship connections) with the interests of others, advertisers can, with sometimes astonishing degrees of accuracy, anticipate seemingly unrelated products to put forth as suggestions of potential interest.
Alternatively, I’m sure I’m not the only one who can attest to having received Facebook friend recommendations based on someone I had a conversation with at a bar, but with whom no digital information of any form was exchanged. Except, in fact, it was — via the real-time correlation of our GPS coordinates, it can be inferred that we spent time engaging with one another in a way that couldn’t have been coincidental.
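The recommendation systems described above can be caricatured in a few lines: count how often items co-occur in the same purchase history, and suggest the strongest co-occurrences. A minimal sketch with invented data (all item names are hypothetical):

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories; all item names are hypothetical.
baskets = [
    {"telescope", "star_atlas", "red_torch"},
    {"telescope", "star_atlas"},
    {"telescope", "star_atlas"},
    {"telescope", "red_torch"},
    {"kettle", "teapot"},
]

# Count how often each ordered pair of items appears in the same basket.
co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item, k=1):
    """Suggest the k items most frequently co-purchased with `item`."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == item})
    return [b for b, _ in scores.most_common(k)]
```

Real systems layer far more signals (and far more mathematics) on top of this, but the principle — inference purely from correlations — is the same.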
But the depth of analysis that can be performed extends far beyond this, to infer almost unimaginably specific information about us.
Behind the scenes, this analysis is incredibly sophisticated, and invisible to us, performing all manner of cross-correlations across multiple degrees of separation, or indeed across society as a whole, based on seemingly obscure information we’d never have given much thought to. This includes not just correlations with other things you have taken interest in (people you have communicated with, or products you have purchased), but also knowledge about which groups (defined arbitrarily — social, demographic, ideological, interest-based, whatever) you belong to, whether overtly specified or not, and the collective behaviour of the respective group. Mathematically, this can be revealed via membership of densely connected sub-graphs (graph cliques — sets of nodes that are all mutually connected) within a social network graph.
Entire fields of mathematics and computer science, notably graph theory and topological data analysis, dedicate themselves to this pursuit. Machine learning techniques are perhaps the most versatile and useful of them all.
These identified sub-graphs can be interpreted as ‘groups’ (in any real-world or abstract sense), identified in some manner as having something in common, from which potentially entirely distinct characteristics of their constituents can be inferred via correlations across groups. Membership of multiple groups can quickly narrow down the specifics of individuals sitting at the intersection of those groups.
This type of analysis is very much the crux of what modern machine learning does by design — advanced higher-order analysis of multivariate correlations, particularly well suited to social network graph analysis, where nodes represent individuals, and edges are weighted by advanced data-structures characterising all aspects of their relationships, far beyond just ‘who knows who’.
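As a toy illustration of the clique analysis described above, here is a brute-force search for maximal cliques on an invented five-person graph. Real analyses use efficient algorithms such as Bron-Kerbosch and far richer edge data; this is only a sketch:

```python
from itertools import combinations

# An invented toy social graph; an edge means the pair interact regularly.
edges = {("ann", "bob"), ("ann", "cat"), ("bob", "cat"),  # a tight triangle
         ("cat", "dan"), ("dan", "eve")}
nodes = sorted({v for e in edges for v in e})
connected = {frozenset(e) for e in edges}

def is_clique(group):
    """A clique: every pair in the group is directly connected."""
    return all(frozenset(pair) in connected for pair in combinations(group, 2))

# Brute-force enumeration, largest groups first (fine for toy graphs only).
cliques = [set(g) for r in range(len(nodes), 1, -1)
           for g in combinations(nodes, r) if is_clique(g)]
maximal = [c for c in cliques if not any(c < d for d in cliques)]
# `maximal` now contains the triangle plus the two remaining pairs.
```

The triangle {ann, bob, cat} is the kind of ‘group’ from which shared characteristics of its members can then be inferred.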
For clarity, many of those familiar with the term social network graph interpret it in the context of the graph data-structures held by social media networks like Facebook. I do not. Nation states hold advanced social network graph data-structures of their own, into which they have the capacity to feed all manner of information they obtain about you. There is very strong evidence that even in the so-called ‘Free World’, the major commercial players actively collaborate with the State, from which the State is able to construct meta-graphs comprising unforeseen amounts of personal information — even if it is ‘just’ metadata.
Much of the truly insightful information to be obtained from advanced graph analysis is not based upon local neighbourhood analysis (i.e. you and the dude you just messaged), but upon global analysis (i.e. based collectively upon the information contained across the entire social graph, and its multitude of highly nontrivial interrelationships, most of which are not at all obvious upon inspection, which no human would ever conceive of taking into consideration, but which advanced algorithms will systematically exploit, without discrimination).
Machine learning is oblivious to laws or expectations against demographic profiling — racial, gender, political, medical history, or otherwise. Even if such data isn’t fed directly into the system (e.g. owing to legal barriers), it will almost certainly recover it via correlation.
For example, with access to someone’s Facebook connections, and using current freely-available software libraries, an unsophisticated programmer could infer much about their demographics, political orientation, gender, occupation, and much more, with a high degree of accuracy in most cases, even if no information of the sort were provided on their profile directly. This elementary level of analysis can be implemented in five minutes with a few lines of code using modern tools and libraries. Needless to say, national signals intelligence (SIGINT) agencies have somewhat greater resources at their disposal than hiring a summer student for half an hour.
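A caricature of that ‘few lines of code’: with invented friendship data, an undisclosed attribute can be guessed by majority vote over the known attributes of one’s connections, exploiting homophily (people tend to connect with people like themselves). All names and affiliations here are hypothetical:

```python
from collections import Counter

# Invented friendship lists and partially disclosed affiliations.
friends = {
    "alice": ["bob", "carol", "dave"],
    "mallory": ["bob", "carol"],
}
known_affiliation = {"bob": "party_x", "carol": "party_x", "dave": "party_y"}

def infer_attribute(person):
    """Guess an undisclosed attribute by majority vote over known friends."""
    votes = Counter(known_affiliation[f] for f in friends[person]
                    if f in known_affiliation)
    return votes.most_common(1)[0][0] if votes else None

# "alice" never disclosed an affiliation, yet one can be guessed anyway.
```

Real attribute-inference research uses far stronger relational classifiers, but even this naive majority vote illustrates why withholding your own data does not protect you if your connections disclose theirs.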
The Russian influence campaign prior to the Trump election is alleged to have actively exploited exactly this kind of analysis, with a special focus on demographic analysis (including geographic, gender, voting, ideological, group-membership, and racial profiling), to create a highly individually-targeted advertising campaign, whereby the political ads targeted at you may have been of an entirely different nature to the ones received by the guy next door, each individually calculated to maximise psychological response.
This technology demonstrably provides direct avenues for population control, in particular via psychological operations (PSYOPs). If it can be exploited against us by adversarial states, it can be exploited against us by our own. I work off the assumption that all states, including our own, are adversarial.
The political chaos this has created in the United States (regardless of whether or not the Russian influence campaign actually changed the election outcome) is testament to the power and efficacy of these computational techniques. There would be little uproar over the incident were it not regarded as entirely plausible.
Similarly, it’s no secret at all that political parties in democratic countries rely on these kinds of analytic techniques for the purposes of vote optimisation, policy targeting, and gerrymandering. Indeed, commercial companies license specialised software optimisation tools to political operatives for exactly this purpose — it’s an entire highly successful business model. If politicians utilise this before entering office, you can be sure they’ll continue to do so upon entering office — with astronomically enhanced resources at their disposal, backed by the full power of the State, and the information it has access to.
It goes without saying that, since the tools for performing these kinds of analyses are available to everybody, anywhere in the world, as freely downloadable software packages, and since the entire business model of corporations like Google and Facebook is built upon them (and to be clear, that is the business model, into which astronomical resources are invested), the capabilities of major nation-state intelligence apparatuses (especially the joint power of the Five Eyes network, to which my home country of Australia belongs) extend into unimaginable territory, given the resources they have at their disposal — both computational and in terms of access to information.
The National Security Agency (NSA) of the United States, their primary signals intelligence agency, is allegedly the world’s largest employer of mathematicians, and also possesses incredible computing infrastructure. Historically, cryptography (a highly specialised mathematical discipline) has been the primary focus of this. In the era of machine learning, and what can be gained from it from an intelligence perspective, you can place bets this now forms a major component of their interest in advanced mathematics.
But the applications are not limited to catching terrorists (how many terrorist attacks have actually taken place on American soil to justify investments of this magnitude?). There is good reason that China is now a leading investor in AI technology, given its highly integrated social credit scheme, which has very little to do with terrorism and far more to do with population control.
We must not become a political replica of the People’s Republic of China. But we probably already are. In the same way that the Chinese people are in denial, so are we.
“But this couldn’t happen here! We are a ‘democracy’ after all?”
It’s now publicly well known that the NSA has a history of illegally obtaining and storing information on American citizens within their systems, in direct violation of the United States Constitution. National security secrecy laws make it impossible to know in what capacity this has been utilised, but the potential for misuse has utterly devastating implications for American citizens and the constitutional rights they believe in.
When applied from the nation state’s point of view (democracy or dictatorship, regardless), where the overriding objective is population control and the centralisation of power, the primary tool at their disposal is, and always has been, to manipulate and subjugate the people. With this kind of advanced analytic power at their disposal, their ability to do so is in fantastically post-Hitlerian territory. If Stalin were alive today, he would not subscribe to pornography magazines, he would subscribe to the Journal of Machine Learning, and spend evenings when his wife was absent, wearing nothing but underwear in front of a computer screen, salivating over the implementation of China’s social credit scheme, and its highly-integrated nationwide information feed, built upon a massive-scale computational backend, employing the techniques described above with almost certainty. He could have expanded the gulags tenfold.
But any kind of computational analysis necessarily requires input data upon which to perform the analysis. There are few computations one can perform upon nothing.
We have a responsibility to provide the State with nothing!
Let them eat cake.
Better yet, let them starve.
And may their graphs become disjoint.
When the State obtains information about your interactions, it adds information to their social graph. What Google and Facebook have on you, which many express grave concern about, is nothing by comparison. The enhancement of this data structure does not just compromise you, but all of society. It compromises our freedom itself.
Prosperity and a good quality of life are not freedom — Hitler achieved overwhelming support because the people thought so. Freedom means that at no point in time, under any circumstance, can they take it all away from us, or even threaten to. This is not the case; it never was, and it likely never will be, yet it must be so.
In the interests of the formation of a Free State, and inhibiting the ever-increasing expansion of the police state, the extrapolation of which is complete subservience, we have a collective responsibility to:
All technologies can be used for good or for evil. There has not been a technology in history to which this hasn’t applied. The establishment of the internet itself faced enormous political opposition for highly overlapping reasons. Needless to say, the internet has been one of the most positively influential technological advances in human history, and one of the most individually liberating and empowering tools ever invented — an overwhelming force for freedom of information, expression, and unprecedented prosperity and opportunity across the globe.
I genuinely believe that the biggest threat humanity faces — far beyond terrorism, drugs, or pedophiles — is the power of the State.
History overwhelmingly substantiates this belief, and despite acknowledging the downsides, I make no apologies for offering my unwavering support to the crypto movement and what it seeks to achieve. The power asymmetry in the world has never been tilted in favour of terrorists — it always was, and always will be, in favour of the State, historically the greatest terrorists of them all.
One of the deepest and most eternally meaningful statements ever made in the history of political philosophy, came from one of its most nefarious actors,
“Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in any country.”— Hermann Göring
The UTS Centre for Quantum Software and Information (UTS:QSI) is seeking bright, enthusiastic students to join our Quantum PhD Event on September 16. Up to 20 travel awards will be presented to selected early-bird registrants living outside the Sydney Basin area.
A fully-funded PhD position is available at the University of Technology Sydney, Australia, to conduct forefront theoretical research in the quantum information sciences, working with Dr Peter Rohde within the Centre for Quantum Software & Information (QSI).
The research topics are flexible, including:
* Quantum machine learning
* Quantum cryptography (especially homomorphic encryption and blind quantum computing)
* Quantum networking (especially cloud quantum computing)
* Quantum computing (with an emphasis on optical quantum computing, boson-sampling and quantum random walks)
* Quantum information theory
* Quantum metrology
* Your own suggestions for exciting projects in quantum technology
The candidate should have a background in physics, computer science, engineering, mathematics or a related discipline, and be highly creative, independent and passionate about quantum technology. The position is for 3 years, and includes a scholarship of $25,861/year (tax free). The position is research-only, with no teaching or coursework obligations.
QSI is a leading research centre in the quantum information sciences, and the candidate will have the opportunity to collaborate with leading researchers within the centre, as well as with other researchers domestically and internationally. Sydney is home to several major quantum research centres, presenting outstanding local collaboration opportunities.
The candidate will conduct theoretical research to be published in international journals, present research findings at major conferences, and build collaboration networks. Travel opportunities will be available.
To apply for the position or request further information, please contact Dr Peter Rohde (email@example.com) by January 14. When applying, please provide a resume, academic record, contact details for two academic referees, and a statement of your research interests and passions. Applications are now open.
Please distribute this advert amongst your colleagues, students, mailing lists and Facebook groups.
- Ad astra per alas fideles. Scientia potentia est.
I'm pleased and honoured to announce that I have just been awarded a prestigious ARC Future Fellowship to conduct a 4 year project into quantum networking and encrypted quantum computation. I will be based at the University of Technology Sydney, where I have received tenure as a Senior Lecturer. Ad astra.
I oppose the death penalty. I oppose it per se. I oppose it regardless of the crime, and regardless who it is applied to. If, like me, you oppose the death penalty, oppose it outright, not because of the nationality of the victim.
Every year around the world thousands of people are put to death. Many are put to death via barbaric means for ‘crimes’ that shouldn’t be crimes. There are parts of the world where women are publicly stoned to death for the ‘crime’ of being a rape victim. There are places where women are drenched in acid until they are dead for the ‘crime’ of bringing shame upon their family. There are places where homosexuals are thrown from the roofs of ten story buildings for the ‘crime’ of being homosexual. People are put to death for changing religion, insulting their religion, or offending the leader of their country.
Where is the outrage and the media and political spectacle when these horrific forms of the death penalty are carried out? The politicians remain silent. The media says nothing. The general population don’t threaten travel embargoes or boycott products. And there are no candle-lit vigils on the victims' behalf. To stay silent whilst these kinds of acts are taking place, but then be outraged because two of the victims happen to be Australian, is effectively saying that the life of a guilty Australian criminal is worth more than the life of an innocent Saudi rape victim, or an innocent Iraqi homosexual teenager.
Oppose the death penalty - I do. But make your opposition to it consistent and not hypocritical. Oppose it because it’s wrong - always wrong. Don’t oppose it because the victim happens to be the same nationality as you.