All posts by Peter Rohde

Quantum computer scientist, Mountaineer, Adventurer, Composer, Musician, Public Speaker, DJ

Sex Crimes vs War Crimes (on Seth Lloyd, Jeffrey Epstein & the Military-Industrial Complex)

Following the revelations, prosecution, and subsequent death of former billionaire Jeffrey Epstein, relating to his incredibly sinister history of sexual exploitation and trafficking of minors, and the multitude of high-profile names implicated in the ongoing allegations, my own community — the quantum computing academic community — has found itself drawn in, via the research donations given to prominent MIT Professor Seth Lloyd by Epstein.

For those who don’t know, Seth is an extremely prolific and influential figure in our field, who has made a beyond-staggering academic contribution to our area of research.

Although there hasn’t been any suggestion (to my knowledge) that Seth was in any way involved in or supported Epstein's sexual depravity, following the revelations that he had accepted donations from Epstein to fund his research he has been placed on paid leave, and many are calling for him to be dismissed from MIT outright for his lack of judgement in accepting these donations, including via student-led protests against him.

I have no knowledge whatsoever of the nature of the personal relationship between the two, what they talked about when Seth visited Epstein in jail, or anything remotely along those lines. I want to avoid all of that altogether, because I’m simply not in a position to have an opinion on it, much less to express one.

I don’t personally know Seth very well, having only ever socialised with him on a few occasions at conferences overseas (of course, I know his academic work very well). Needless to say, I never knew Epstein at all. So none of this should be interpreted as some kind of underhanded attempt to 'stand up for a mate', or anything of the sort. There are no partiality issues at play here.

Having said this, what I want to raise is (in my mind) a very glaring moral equivalence between Seth's actions and something that is morally highly comparable, which people in our research community engage in all the time (and to be clear, I am no exception): accepting money from major international defence contractors. In many instances these companies are well known to knowingly provide material support for war crimes and other crimes against humanity at a global scale, to engage in war profiteering, and to use their immense wealth for extensive political lobbying that perpetually promotes the expansion of this self-reinforcing agenda of permanent armed conflict.

They also happen to dish out tons of cash to researchers in forefront scientific areas, such as ours.

I recall the first time I accepted a university position directly funded by a major international defence contractor (they financed my entire salary at the time). I was extremely aware of their highly morally questionable history. Upon being offered the position, a point in my life at which I had few other career options, I genuinely emotionally and morally struggled with myself in ways I never had before (to the point of falling into a prolonged state of deep depression upon making the decision to accept it), and internally debated with myself about it for quite some time before coming to terms with it via the following conclusion:

So long as the research I am conducting using their money is open research, accessible to all, and not in any way kept secret for the select benefit of the financiers, then every dollar I accept from them is a dollar less spent on raining down missiles on some impoverished country, under illegal military assault or occupation. Surely it’s far better for me to take their cash and use it to advance science for the benefit of all, than let it contribute to rolling the next cruise missile off the production line?

I’ve thought about it a lot since, and I am still in retrospect very comfortable with the above moral justification, and would be open to accepting further such cash contributions from similar entities, assuming the caveats and conditions stated above remained in place.

Without having any special inside knowledge of the Lloyd vs Epstein case, it seems clear to me that there is a significant moral equivalence between these two scenarios. As far as I'm aware (and do correct me if I'm wrong), all research conducted by Seth using Epstein's money was openly published scientific research, where the funding source (Epstein) was acknowledged accordingly for financial support (as is the expected scientific norm, in the same way that defence contractors are acknowledged accordingly).

What I’m interested in hearing from those in academia (or outside for that matter), who receive money, directly or indirectly, from highly morally questionable defence sources (which is most of us at some point or another in our careers as quantum computer scientists), is what is fundamentally different about accepting money from sex criminals as opposed to war criminals, provided that the research is scientifically open, for all to access, and does not preferentially benefit the financier in any way?

One could indeed go further by pointing out that those accepting research funds from defence contractors knowingly engage in the following:

  • Accepting money from organisations known to promote and contribute to illegal wars.
  • Enhancing their reputation via the required funding source acknowledgements in published work.
  • Developing science and technology that may be of direct material benefit to their efforts.
  • Enhancing their networking and influence potential, via the provision of direct high-level access to upper university leadership.
  • Reporting on the latest scientific advancements, providing them with the intelligence to project a potential competitive edge.
  • Gaining recognition within the academic community as the 'go-to people' to seek partnerships with when major developments are made.
  • In some instances, accepting direct guidance from these organisations as to the nature of the research being undertaken (in others there are very few strings attached).

In the case of donations sourced from a private individual, much of this does not apply. Certainly, networking ability and reputational enhancement may be of benefit. Direct scientific and technological developments are highly unlikely to be — certainly not in any manner that would foreseeably benefit the depraved acts of someone like Epstein.

The second issue — that of Seth visiting Epstein in jail following his initial conviction — is one where I believe we should all be extremely mindful of how little we know about what that visit entailed. Were a friend of mine to end up in jail, for whatever reason, I'd almost certainly pay them a visit. That would not automatically imply that the visit was a tacit statement of endorsement — it could very well be entirely the opposite. Speaking to someone needn't at all imply the exchange be positive, pleasant or supportive in nature. This is something that presumably none of us is in a position to pass full judgement on, given our lack of information. That's not to say I don't absolutely recognise that making such a visit at all brings with it enormous potential for a complete PR disaster (clearly that's exactly what followed).

I want to be absolutely clear that I'm not attempting to morally absolve or implicate anyone (Seth Lloyd, MIT, myself, my colleagues, our industry, nor the academic community at large), nor take sides. Rather, what I would like to promote is consistency in the way we view such issues, from a humanist perspective, both within academia and beyond, and hear sound and consistent arguments as to why Seth Lloyd's decision to accept research funding from a sex criminal is inherently different to (or indeed worse than) the far more common, and accepted, practice of accepting research money from known war criminals and war profiteers (which most in my industry are guilty of — especially those at the top).

In terms of the way in which I have personally morally justified accepting money (under appropriate constraints) from war profiteers, why should a similar moral justification not apply more generally, for example to the scenario presently involving Seth Lloyd?

If Lloyd is to lose his job for having used the money from a known sex criminal for the purposes of open scientific research, should the rest of us also lose ours for accepting money derived from war profiteers, who support the violation of international law, knowingly enable war crimes, and other crimes against humanity?

Frankly, those of us who have, have far more to answer for. And I, like most, am one of them.

Nb: I realise that writing anything whatsoever on this particular topic at the present moment is incredibly dangerous territory to wade into. Given the nature of the crimes committed by Epstein, any discussion of this topic has tremendous potential to cause enormous hurt to countless people. I really do want to make this clear, and I mean this as genuinely as I possibly can, that in writing this the absolute last thing I want to do is come across as trivialising the depravity of Epstein, or turning a blind eye to it. For very personal reasons, the crimes Epstein committed are ones that are deeply emotionally upsetting to me. If any reader interprets this post as dismissive or trivialising in tone, let me assure you that's not at all what was intended. My intention is very different to that — depraved sex criminals aren't the only criminals in the world, and if we are to take a strong moral stance against criminal depravity, and ensure that scientific research funding is sparkling clean, it should be applied in a self-consistent and uniform manner. To all the victims of Epstein, and those like him, you have my unwavering support.

In Australia, call Lifeline (13 11 14) if these issues affect you. Similar free and confidential services are available in many other jurisdictions around the world.

Happy New Year & thank you to our Firies!

Sydney fireworks (2019-2020), taken from Blues Point Tower.

I'm glad Sydney went ahead with this, while giving my absolute respect to those who have perished or lost their homes in the surrounding fires. As I watched in amazement at the display, I chose to dedicate that time to reflecting on my gratitude to the RFS volunteers. A celebration needn’t be disrespectful. It can be used to show gratitude too. Perhaps the City of Sydney should have made such a dedication. Thank you RFS.

Referee request response (decline)

Dear Editor,

Thank you for your invitation to review this manuscript for your journal. Unfortunately, I must decline the invitation given that, as a matter of principle, I do not support or endorse the activities of for-profit scientific journals.

The scientific community already provides this industry, free of charge:

  • Conducting all scientific research.
  • Writing all scientific manuscripts.
  • Acting voluntarily in editorial roles.
  • Performing all refereeing.
  • (i.e. the entire workload of your organisation, other than hosting the website on which you serve the PDFs).

In exchange, we receive:

  • Massive journal subscription fees.
  • Article download fees.
  • Article publication fees.
  • Intimidation tactics employed against us when we prefer not to be a part of it.
  • Anti-competitive and financially predatory distribution tactics.
  • Institutionalised mandates for the above.

This is not a symbiotic relationship, but a parasitic one, for the larger part financed by the taxpayer, who should rather be financing our research. I can no longer endorse this one-sided relationship, in which for-profit journals effectively tax scientific research, to the tune of billions of dollars annually, often using coercive and intimidatory sales tactics, whilst providing very little or no value in return. This capital is best spent on what it was intended for — scientific research for the benefit of humankind — training students, hiring research staff, financing equipment, travel and infrastructure — to which your organisation contributes nothing whatsoever other than to extort value.

In addition to declining this invitation, please note the following for future reference:

  • Remove my name from your referee database.
  • Cease and desist from using intimidatory tactics when I decline to volunteer my labour (which is of very high value) to your pursuit of profit (in exchange for nothing). Hassling me for declining to voluntarily contribute my labour to your revenue-raising is tantamount to harassment and extortion.
  • Do not request that I voluntarily act as your journal editor.
  • Do not work in cahoots with national scientific funding agencies to enforce your own vendor lock-in, thereby effectively mandating your own services, which are in fact of very little or no value whatsoever. This is an indirect form of taxation upon scientific research, which I have no interest in paying, and which none of us should be expected or forced to pay.
  • Note that I do not personally intend to submit any further manuscripts to your journal for consideration (if my co-authors do, I won’t stand in their way).

Personal note to the Editor: this should not be construed as a personal attack against you, whom I absolutely respect, but rather against the industry which is exploiting you in a slave-like work relationship, whilst using you as a conduit to engage me for the same purpose. I write this as an act of solidarity with you.

We advance human knowledge for the benefit of humanity, and provide it as a gift for all.

Referee 2.

(This post may be freely linked to, reused, or modified without acknowledgement)

New paper: Photonic quantum error correction of linear optics using W-state encoding

With my PhD student Madhav Krishnan Vijayan, and old PhD colleague Austin Lund.

Full paper available at


Error-detection and correction are necessary prerequisites for any scalable quantum computing architecture. Given the inevitability of unwanted physical noise in quantum systems and the propensity for errors to spread as computations proceed, computational outcomes can become substantially corrupted. This observation applies regardless of the choice of physical implementation. In the context of photonic quantum information processing, there has recently been much interest in passive linear optics quantum computing, which includes boson-sampling, as this model eliminates the highly-challenging requirements for feed-forward via fast, active control. That is, these systems are passive by definition. In usual scenarios, error detection and correction techniques are inherently active, making them incompatible with this model, arousing suspicion that physical error processes may be an insurmountable obstacle. Here we explore a photonic error-detection technique, based on W-state encoding of photonic qubits, which is entirely passive, relies on post-selection, and is compatible with these near-term photonic architectures of interest. We show that this W-state redundant encoding technique enables the suppression of dephasing noise on photonic qubits via simple fan-out style operations, implemented by optical Fourier transform networks, which can be readily realised today. The protocol effectively maps dephasing noise into heralding failures, with zero failure probability in the ideal no-noise limit.


Meet Peter Rohde, an Australian Research Council Future Fellow in the Centre for Quantum Software & Information at the University of Technology, Sydney. His theoretical proposals have inspired several world-leading experimental efforts in optical quantum information processing.

As a collaborator in China’s world-first quantum satellite program, he aided the design of quantum protocols for space-based demonstration. Rohde has worked at highly acclaimed institutes such as the University of Oxford and the Institute for Molecular Biosciences, with over 60 publications and 1,500+ citations spanning quantum optics, quantum information theory, ecology, and politics.

Learn more about the world of quantum computing through the eyes of Peter Rohde. Grab your tickets.

Don’t stop Fake News

Given the rate of information flow in the social media generation, and the ability for information to go internationally viral in a matter of minutes — requiring nothing more than thoughtless button-clicking within a few degrees of separation — it’s undeniable that the propagation of Fake News poses a major threat. Whether it be malicious electoral interference, or the spread of nonsensical views on medicine leading to the re-emergence of deadly but entirely preventable diseases, the consequences are catastrophic (indeed, they already have been), and pose a major threat to humanity.

For this reason it’s understandable that people want to put an end to it. Of course we don’t want measles (or Trump). But how do we achieve this? Many politicians around the world are pressuring social media giants to filter content to eliminate fake news, while others are advocating legislation to force them to.

I oppose such approaches outright, and believe they pave the way for even greater thought manipulation. (Interpret the terminology 'fake news prevention' as being synonymous with 'terrorists, drugs and pedophiles', as per my last article.)

Most news is fake (or misrepresented)

What constitutes fake news, anyway? Given that even upon reading articles about the same event, as portrayed by two ideologically distinct, yet well-respected mainstream newspapers, the tilt can be so astronomical, with both sides criticising the other for bias and corruption, the notion of fakeness is hardly an objective one. When it comes to statements made by politicians it’s even more perverse.

There is no such thing as an unbiased media source, nor will any story we read have full access to all information, or the full background context, or be 100% verifiably correct. Essentially what propagates over the internet is a close approximation to white-noise. Applying the appropriate filter, you can extract any signal you want.

Any kind of enforcement of filtering or information suppression implies certain types of information being removed at the behest of those with the ability to do so. Those people are necessarily in positions of power and influence, and will pursue their own interests over the collective one. The ability to impose filtering, enables post-selection bias by mandate. In conjunction with the false sense of security that a filtering system creates, the outcome is even greater vulnerability to the self-reinforcement and confirmation biases we seek to avoid.

The pretext for power

The implications of the ability for those in power to manipulate this to their advantage are obvious, and are the basis upon which totalitarian societies are built. Already in Singapore there have been deep concerns surrounding this, where anti-fake news legislation requires organisations to

“carry corrections or remove content the government considers to be false, with penalties for perpetrators including prison terms of up to 10 years or fines up to S$1m.”

The term “the government considers to be false” is an illuminating one.

Once a mandate for filtering is established, its application cannot be confined to what is ‘fake’, nor can we trust those making that determination to wield this extraordinary power. With such a mandate in place, the parameters defining its implementation will evolve with the political agenda, likely via regulation rather than legislation, isolating it entirely from any democratic oversight or debate. Regardless of who is at the helm, be sure that it will be used to undermine those who are not. History substantiates this — it is why we hold them to account, rather than blindly trust them to do what is right.

How to fight fake news

Instead of relying on those with vested interests to take on fake news, we must arm ourselves to do it in their absence. We must do so in a manner that is collective, transparent, decentralised, and robust against malign political interference (i.e. all political interference).


By far the most powerful avenue towards combating fake news is to equip people with the skills to do it themselves. For this reason, the following should be taught to all, from the earliest possible age, as essential components of our education system:

  • Critical thinking and rationalism.
  • Recognising logical fallacies.
  • Elementary statistics and probability theory (even if only qualitatively at an early level).
  • Online research skills, and the difference between what constitutes research versus Googling to find the answer you want to believe (i.e. confirmation bias — “I was trying to find out whether the Moon landing was a conspiracy, and came across this amazing post on 8chan by this guy who runs an anti-vax blog (he’s pretty high up) that provided a really comprehensive and thoughtful analysis of this! BTW, did you know that the CIA invented Hitler in the 60’s as a distraction from the Vietnam war? I fact-checked it with more Googling, and it works out.”).
  • Encouraging kids to take up debating in school, where these become essential skills.

Already Finland has reportedly had great success pursuing precisely this approach at the school level, with similar discussions emerging in the UK and within the OECD. Finland’s approach (Nb: I don’t know the details of the curriculum) is foresighted and correct.


Sometimes spotting fakeness at a glance is challenging, and even the most mindful social media users will routinely fall for things, making software tools indispensable to a robust process. Certainly, modern analytical techniques could be employed for this purpose to reveal the reliability of information sources, usually with a high degree of accuracy. When it comes to social media giants applying fake news filters, this is inevitably the route that will be taken. It can’t possibly be done by hand.

If the purpose of such software tools is to make us aware of misleading information, then its manipulation provides an even more powerful avenue for misleading us than the underlying information itself, based on the false sense of security, and our own subsequent subconscious loosening of internal filtering standards.

To illustrate this, the existing social media giants, Facebook and Twitter, are already routinely accused of implementing their anti-hate-speech policies in a highly inconsistent and asymmetric manner. Everyone will have their own views on this, but from my own observations I agree with this assessment. Note that selectively preventing hate speech from one side, whilst not doing so for the other, is an implicit endorsement of the latter, tantamount to direct political support. This type of political support — the ability to freely communicate, and the simultaneous denial of one’s opponents to do so — is the single greatest political asset one can have. The ability to platform and de-platform entire organisations or ideologies is the single most politically powerful position one can hold — it’s no coincidence that the first step taken in the formation of any totalitarian state is centralised control of the media.

This implies that any tools we rely on for this purpose must be extremely open, transparent, understandable, and robust against intentional manipulation. In the same way that you would not employ a proprietary cryptographic algorithm for encrypting sensitive data, with no knowledge of its internal functioning, the same standard of trust must be applied when interpreting the reliability of information, let alone outright filtering it.

Simultaneously, these tools must be allowed to evolve and compete. If they are written behind closed doors by governments or by corporations, none of these criteria will be met. The tools cannot fall under any kind of political control, and must be decentralised and independent.

Tools based on community evaluation and consensus should be treated with caution, given their vulnerability to self-reinforcement via positive feedback loops of their own — a new echo-chamber. Indeed, this vulnerability is precisely the one that fake news exploits to go viral in the first place.

Will machine learning save us?

Identifying unreliable information sources is something that modern machine learning techniques are extremely well-suited to, and if implemented properly, would likely be our most useful tool in fact-checking and fake news identification. However, these techniques are inherently at odds with my advocacy for algorithmic transparency.

In machine learning, by definition, we don’t hard-code software to spot certain features. Rather we train it using sample data, allowing it to uncover underlying relationships and correlations for itself. A well-trained system is then in principle able to operate upon new data it hadn’t previously been exposed to, and identify similar features and patterns. The problem is that what the system has learned to see is not represented in human-readable form, nor even comprehensible to us, given its mathematical sophistication. If the original training data were to be manipulated, the system could easily be coaxed into intentionally exhibiting the biases of its trainers, which would be extremely difficult to identify by outsiders.
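To make this concrete, here is a toy sketch (with entirely invented training data, purely for illustration) of a naive-Bayes-style word classifier of the kind one might train to flag suspect text. The point is that everything the system 'knows' lives in its training labels: relabel the same data and the identical algorithm confidently learns the opposite verdict, with nothing in the code itself revealing the bias.

```python
# Toy naive-Bayes-style text classifier (invented data, illustration only).
# Everything the model "knows" is inherited from its training labels --
# poison the labels and the same algorithm learns the opposite bias,
# with nothing visible in the code itself.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs -> per-label word counts."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the label maximising a smoothed log-likelihood of the words."""
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(words)
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab + 1))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

honest = [("miracle cure doctors hate", "fake"),
          ("study published in peer reviewed journal", "real")]
# Identical texts, labels deliberately swapped by a malicious trainer:
poisoned = [(t, "real" if l == "fake" else "fake") for t, l in honest]

print(classify(train(honest), "miracle cure"))    # -> fake
print(classify(train(poisoned), "miracle cure"))  # -> real
```

An outside auditor inspecting only the trained counts (let alone the millions of opaque weights in a real deep network) would have no obvious way of telling which of these two systems was trained in good faith.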

I don’t advocate against the use of machine learning techniques at all. However I very much advocate for recognising their incompatibility with the desire for complete transparency and openness, and the recognition that this establishes a direct avenue for manipulation.

Design for complacency

The biggest obstacle of all to telling fact from fiction is our own complacency, and whether we even desire to do so. Given that in just minutes a Facebook or Twitter user can scroll through hundreds of posts, if establishing the reliability of a source requires opening multiple new browser windows to cross-check and research each one individually, it will undermine the user experience — the average user (especially those most vulnerable to influence by fake news) will not bother to.

The tools we develop for verifying reliability must accommodate this as the most important design consideration, providing a fully integrated and user-friendly mechanism, which does not detract from the inherently addictive, slot-machine-like appeal of the social media experience. If the tools detract from the user experience, they will be rejected and become ineffective at a mass scale.

Modern-day book burning

What interest does the State have in preventing Fake News? None; this is how they subsist. What they actually desire is to selectively eliminate information which works against their interests.

In the presence of overwhelming white-noise, selective elimination is just as powerful as the creation of new misinformation.

Providing them with a mandate to restrict the information we are able to see (in the ‘public interest’ no less) is to grant them the right to conduct the 21st century equivalent of 1940’s book burning ceremonies. Needless to say, having established a mandate to hold the ceremonies, they will decide for themselves which books get burnt.

Rather than burn our books on our behalf, let us decide which ones we would like to read, but let us also develop trustworthy, reliable, and accessible tools for making that determination for ourselves. Admittedly, much of society is highly fallible and unreliable when it comes to making such self-determination. To those in positions of power this applies even more so, given that they necessarily have interests to pursue, and seek a centralised approach for that reason.

There is an important relationship between free people and those in power that must be maintained, whereby our freedoms will only be upheld if accountability is enforced. The latter is our responsibility, not theirs. To delegate the accountability process — of which the free flow of information is the single most pivotal element — to those being held to account is to capitulate entirely, and voluntarily acquiesce to subservience via population control.

Encryption & anonymity is a responsibility not a right – In defence of cryptoanarchy

Most of the world’s internet users feel little need to rely on encryption, beyond when it is completely obvious and implemented by default (e.g. when performing online banking). But when it comes to personal communications, where traffic interception by the State is highly likely in some jurisdictions, an outright certainty in others, the average user takes the “I have nothing to hide” attitude.

Assume by default, whether you live in a ‘democracy’ or under overt fascism, that the State intercepts everything. Constitutional rights are a facade. They are not enforced by the State; rather, they exist to hold the State to account. The onus for enforcing them lies upon us.

In today’s world, where advanced machine learning technology is freely available to all, and implementable by those with even the most elementary technical expertise, this attitude is naive at best, and wilfully negligent at worst, based on the outdated notion that all the State has to gain from unencrypted communications, and the identities of those involved, is some obscure and seemingly unimportant piece of information (who cares what I had for breakfast, you say?). This is based on a completely forgivable, but fundamental misunderstanding of not just what information can be directly extracted, but which can be inferred from it.

What can be inferred about you, indirectly allows things to be inferred about others, and others...

We live in an era of incredibly advanced analytical techniques, backed by astronomical computing resources (in which the State has the monopoly), where based on nth degrees of separation, and the extraction of cross-correlations between who they are and what they say, anything and everything we say directly compromises the integrity of the rest of society.

The information that can be exploited includes, but is in no way limited to:

  • Message content itself.
  • When it was sent and received.
  • Who the sender and receiver were.
  • Their respective geographic locations, including realtime GPS coordinates.
  • More generally, everything included under the generic term metadata.
  • And far more things than I dare imagine...

All of these things can subsequently be correlated with all other information held about you, and those you engage with, and similarly for them, extended onwards to arbitrary degree.

The simplest example of how this type of technology works is one we are all familiar with — online recommendation systems (including purchasing suggestions, and social media friend recommendations). By correlating the things you have previously expressed interest in (e.g. via online purchases, or existing friendship connections) with the interests of others, advertisers can, with sometimes astonishing accuracy, anticipate seemingly unrelated products to put forth as suggestions of potential interest.
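As a sketch of the underlying idea (with entirely made-up users and purchases; real systems use far richer models and vastly more data), a minimal recommender needs only a similarity measure between users and a weighted vote over what similar users own:

```python
# Minimal user-user recommender over invented purchase data: score each
# candidate item by the similarity (Jaccard overlap) of the users who own it.
purchases = {
    "alice": {"telescope", "star-atlas", "thermos"},
    "bob":   {"telescope", "star-atlas", "red-torch"},
    "carol": {"thermos", "hiking-boots"},
}

def jaccard(a, b):
    """Overlap of two sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

def recommend(user):
    """Suggest the item most strongly 'voted for' by similar users."""
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        sim = jaccard(mine, theirs)
        for item in theirs - mine:  # only items the user doesn't own yet
            scores[item] = scores.get(item, 0.0) + sim
    return max(scores, key=scores.get)

print(recommend("alice"))  # -> red-torch (bob, the most similar user, owns it)
```

Even this toy already infers something alice never stated: that she is the kind of person who might want a red torch. Scale the same logic up to a society-wide graph and the inferences become far less innocuous.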

Alternately, I’m sure I’m not the only one who can attest to having received Facebook friend recommendations based on someone I had a conversation with at a bar, but with whom no digital information of any form was exchanged. But in fact we did — via the realtime correlation of our GPS coordinates, it can be inferred that we spent time engaging with one another in a way that couldn’t have been coincidental.

But the depth of analysis that can be performed, can extend far beyond this, to infer almost unimaginably specific information about us.

Behind the scenes, the analysis behind this is incredibly sophisticated, and invisible to us, performing all manner of cross-correlations across multiple degrees of separation, or indeed across society as a whole, based on seemingly obscure information we’d never have given much thought to. This includes not just correlations with other things you have taken interest in (people you have communicated with, or products you have purchased), but also knowledge about which groups (defined arbitrarily — social, demographic, ideological, interest-based, whatever) you belong to, whether overtly specified or not, and the collective behaviour of the respective group. Mathematically, this can be revealed via membership of densely connected sub-graphs (e.g. graph cliques) within a social network graph.

Entire fields of mathematics and computer science, notably graph theory and topological data analysis, dedicate themselves to this pursuit. Machine learning techniques are perhaps the most versatile and useful of them all.

These identified sub-graphs can be treated as ‘groups’ (in any real-world or abstract sense), identified as having something in common, from which potentially quite distinct characteristics of their members can be inferred via correlations across groups. Membership of multiple groups can quickly narrow down the specifics of individuals sitting at their intersection.
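To make the clique-and-intersection idea concrete, here is a minimal sketch (the contact graph and names are invented for illustration) that enumerates maximal cliques with the standard Bron–Kerbosch algorithm, treats each clique as a ‘group’, and then intersects the groups to single out one individual:

```python
def maximal_cliques(adj):
    # Bron–Kerbosch enumeration of maximal cliques in an undirected graph,
    # given as an adjacency dict {node: set_of_neighbours}
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.discard(v)
            x.add(v)
    expand(set(), set(adj), set())
    return cliques

# Hypothetical contact graph built from observed communications
edges = [("ana", "ben"), ("ana", "cat"), ("ben", "cat"),   # one tight group
         ("cat", "dan"), ("dan", "eve"), ("cat", "eve")]   # a second group
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

groups = [c for c in maximal_cliques(adj) if len(c) >= 3]
overlap = set.intersection(*map(set, groups))
print(overlap)  # 'cat' sits at the intersection of both tight groups
```

Knowing that ‘cat’ belongs to both groups immediately says more about ‘cat’ than membership of either group alone — which is the narrowing-down effect described above.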

This type of analysis is very much the crux of what modern machine learning does by design — advanced higher-order analysis of multivariate correlations, particularly well suited to social network graph analysis, where nodes represent individuals, and edges are weighted by advanced data-structures characterising all aspects of their relationships, far beyond just ‘who knows who’.

For clarity: many who are familiar with the term ‘social network graph’ interpret it in the context of the graph data-structures held by social media companies like Facebook. I do not. Nation states hold advanced social network graph data-structures of their own, into which they have the capacity to feed all manner of information they obtain about you. There is very strong evidence that even in the so-called ‘Free World’, the major commercial players actively collaborate with the State, allowing the State to construct meta-graphs comprising unforeseen amounts of personal information, even if it is ‘just’ metadata.

Much of the truly insightful information to be obtained from advanced graph analysis comes not from local neighbourhood analysis (i.e. you and the person you just messaged), but from global analysis: inference based collectively on the information contained across the entire social graph and its multitude of highly nontrivial interrelationships, most of which are not at all obvious upon inspection. No human would ever conceive of taking them into consideration, but advanced algorithms systematically will, without discrimination.
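The local-versus-global distinction can be illustrated with a small invented graph: a node’s degree (its number of direct connections) is a purely local measure, whereas closeness centrality requires traversing the entire graph. In the sketch below, the broker node ‘x’ has fewer direct connections than the hub ‘c’, yet scores higher globally, because everyone must pass through it:

```python
from collections import deque

def closeness(graph, node):
    # Closeness centrality: a global measure requiring a breadth-first
    # traversal of the whole graph, unlike degree, which is purely local
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    reached = [d for n, d in dist.items() if n != node]
    return len(reached) / sum(reached) if reached else 0.0

# Two tight clusters joined only through the broker 'x'
graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "x"},
    "x": {"c", "d"},
    "d": {"x", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}

print(len(graph["x"]), round(closeness(graph, "x"), 3))  # degree 2, highest centrality
print(len(graph["c"]), round(closeness(graph, "c"), 3))  # degree 3, lower centrality
```

Nothing about ‘x’ looks remarkable from its immediate neighbourhood; only the global view reveals it as the bridge between the two communities — which is precisely the kind of insight the paragraph above refers to.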

Machine learning is oblivious to laws or expectations against demographic profiling, whether racial, gender, political, medical, or otherwise. Even if such data isn’t fed directly into the system (e.g. because of legal barriers), it will almost certainly be recovered via correlation.

For example, with access to someone’s Facebook connections, an unsophisticated programmer using freely available software libraries could infer their demographics, political orientation, gender, occupation, and much more, with a high degree of accuracy in most cases, even if no information of the sort were provided on their profile directly. This elementary level of analysis can be implemented in five minutes with a few lines of code using modern tools. Needless to say, national signals intelligence (SIGINT) agencies have somewhat greater resources at their disposal than hiring a summer student for half an hour.
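The ‘few lines of code’ claim can be illustrated with a deliberately crude sketch (the friendship graph and attribute labels are fabricated): a homophily-based majority vote over a person’s connections, which is the kindergarten version of what real graph-inference systems do.

```python
from collections import Counter

def infer_attribute(person, friends, declared):
    # Majority vote over the declared attributes of a person's connections.
    # Homophily alone is often surprisingly predictive; real systems layer
    # far richer models on top of the same idea.
    votes = Counter(declared[f] for f in friends[person] if f in declared)
    return votes.most_common(1)[0][0] if votes else None

# 'target' declares no affiliation, but their friends do
friends = {"target": {"p1", "p2", "p3", "p4"}}
declared = {"p1": "party_A", "p2": "party_A", "p3": "party_A", "p4": "party_B"}

print(infer_attribute("target", friends, declared))  # party_A
```

That is the entire attack: information you never disclosed is reconstructed from the disclosures of the people around you.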

The Russian influence campaign prior to Trump’s election is alleged to have actively exploited exactly this kind of analysis, with a particular focus on demographic profiling (geographic, gender, voting, ideological, group membership, and racial), to create a highly individualised advertising campaign, whereby the political ads targeted at you may be of an entirely different nature to the ones received by your neighbour, each individually calculated to maximise psychological response.

This technology demonstrably provides direct avenues for population control, in particular via psychological operations (PSYOPs). If it can be exploited against us by adversarial states, it can be exploited against us by our own. I work off the assumption that all states, including our own, are adversarial.

The political chaos this has created in the United States (regardless of whether the Russian influence campaign actually changed the election outcome) is testament to the power and efficacy of these computational techniques. There would be little uproar over the incident were it not regarded as entirely plausible.

Similarly, it’s no secret at all that political parties in democratic countries rely on these kinds of analytic techniques for the purposes of vote optimisation, policy targeting, and gerrymandering. Indeed, commercial companies license specialised software optimisation tools to political operatives for exactly this purpose — it’s an entire highly successful business model. If politicians utilise this before entering office, you can be sure they’ll continue to do so upon entering office — with astronomically enhanced resources at their disposal, backed by the full power of the State, and the information it has access to.

It goes without saying that the tools for performing these kinds of analyses are available to everybody, anywhere in the world, as freely downloadable software packages, and that the entire business model of corporations like Google and Facebook is built upon them (and to be clear, that is the business model, into which astronomical resources are invested). What major nation-state intelligence apparatuses have at their disposal (especially the joint power of the Five Eyes network, to which my home country of Australia belongs) extends these capabilities into unimaginable territory, given their resources, both computational and in terms of access to information.

The National Security Agency (NSA), the United States’ primary signals intelligence agency, is allegedly the world’s largest employer of mathematicians, and possesses incredible computing infrastructure. Historically, cryptography (a highly specialised mathematical discipline) has been the primary focus. In the era of machine learning, and given what can be gained from it from an intelligence perspective, you can bet it now forms a major component of their interest in advanced mathematics.

But these techniques are not only applicable to catching terrorists (how many terrorist attacks have actually taken place on American soil to justify investments of this magnitude?). There is good reason that China is now a leading investor in AI technology, given its highly integrated social credit scheme, which has very little to do with terrorism and far more to do with population control.

We cannot become a political replica of the People’s Republic of China. But we probably already are. In the same way that the Chinese people are in denial, so are we.

“But this couldn’t happen here! We are a ‘democracy’ after all?”

It’s now publicly well known that the NSA has a history of illegally obtaining and storing information on American citizens, in direct violation of the United States Constitution. National security secrecy laws make it impossible to know in what capacity this information has been utilised, but the potential for misuse has utterly devastating implications for American citizens and the constitutional rights they believe in.

When applied from the nation state’s point of view (democracy or dictatorship, regardless), where the overriding objective is population control and the centralisation of power, the primary tool at its disposal is, and always has been, to manipulate and subjugate the people. With this kind of advanced analytic power, its ability to do so enters fantastically post-Hitlerian territory. If Stalin were alive today, he would not subscribe to pornographic magazines; he would subscribe to the Journal of Machine Learning, and spend the evenings his wife was absent sitting in his underwear in front of a computer screen, salivating over the implementation of China’s social credit scheme and its highly integrated nationwide information feed, built upon a massive-scale computational backend almost certainly employing the techniques described above. He could have expanded the gulags tenfold.

But any kind of computational analysis necessarily requires input data upon which to perform the analysis. There are few computations one can perform upon nothing.

We have a responsibility to provide the State with nothing!
Let them eat cake.
Better yet, let them starve.
And may their graphs become disjoint.

When the State obtains information about your interactions, it adds to its social graph. What Google and Facebook hold on you, which many express grave concern about, is nothing by comparison. The enhancement of this data structure does not just compromise you, but all of society. It compromises our freedom itself.

Prosperity and quality of life are not freedom; Hitler achieved overwhelming support precisely because the people thought they were. Freedom means that at no point in time, under any circumstance, can they take it all away from us, or threaten to do so. This is not the case; it never was, and it likely never will be. Yet it must be so.

In the interests of the formation of a Free State, and inhibiting the ever-increasing expansion of the police state, the extrapolation of which is complete subservience, we have a collective responsibility to:

  • Understand and inform ourselves about digital security, including encryption and anonymity.
  • Ensure full utilisation of these technologies by default.
  • Be aware of the extent of what modern machine learning techniques can reveal about ourselves, others, and all of society.
  • Be aware of how such collective information can be used against us by the State, assuming the worst case scenario by default.
  • Employ reliable and strong end-to-end encryption technologies, where possible, no matter how obscure our communications.
  • Conceal our identities during communications where possible.
  • Provide the State with nothing beyond what is reasonably necessary.
  • Oppose unnecessary forms of government data acquisition.
  • Oppose the integration of government databases and information systems.
  • Enforce ethical separations in data-sharing between government departments.
  • Legislate criminal offences for government entities misusing or falsely obtaining personal data.
  • Offer full support — financial, via contribution to development, or otherwise — to worthy open-source initiatives seeking to facilitate these objectives.
  • Do not trust ‘closed’ (e.g. commercial or state-backed) security; rely instead on reputable open-source options. Commercial enterprises necessarily comply with the regulations and laws of the State (whichever State that may be). Open-source development, by contrast, is inherently transparent, offering full disclosure and openness.
  • Treat any attempt by the State to seek backdoors, prohibit, or in any way compromise the above as a direct attempt to subvert freedom and a step towards the formation of a totalitarian state.
  • Remember that metadata contributes to the State’s social graph. Even in the absence of content, it establishes identities, relationships, timestamps, and geographic locations, which contribute enormously to correlation analysis against other information in the graph.
  • Be absolutely clear that any political statements to the effect of “we’re only able to access metadata” are made with full knowledge and absolute understanding of the implications of the above, and are deeply cynical ploys. They would otherwise not seek access to it.
  • Treat with contempt, by default, any political statement seeking justification via highly emotive words such as terrorism, safety, protection, national security, stopping drugs, or catching pedophiles, and assume it to be a calculated attempt, via emotional manipulation, to subvert a free society and centralise power.
  • Modern history should be made a mandatory subject throughout primary and secondary education, in which case nothing stated here is even mildly controversial or requires any further substantiation. For this reason, no references are provided in this post.

All technologies can be used for good or for evil. There has not been a technology in history to which this hasn’t applied. The establishment of the internet itself faced enormous political opposition for highly overlapping reasons. Needless to say, the internet has been one of the most positively influential technological advances in human history, and one of the most individually liberating and empowering tools ever invented — an overwhelming force for freedom of information, expression, and unprecedented prosperity and opportunity across the globe.

I genuinely believe that the biggest threat humanity faces — far beyond terrorism, drugs, or pedophiles — is the power of the State.

History overwhelmingly substantiates this belief, and despite acknowledging the downsides, I make no apologies for offering my unwavering support to the crypto movement and what it seeks to achieve. The power asymmetry in the world has never been tilted in favour of terrorists — it always was, and always will be, in favour of the State, historically the greatest terrorists of them all.

One of the deepest and most eternally meaningful statements ever made in the history of political philosophy, came from one of its most nefarious actors,

“Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in any country.”

— Hermann Göring