Don’t stop Fake News

Given the rate of information flow in the social media generation, and the ability for information to go internationally viral in a matter of minutes, requiring nothing more than thoughtless button-clicking within a few degrees of separation, it’s undeniable that the propagation of Fake News poses a major threat to humanity. Whether it be malicious electoral interference, or the propagation of nonsensical views on medicine leading to the reemergence of deadly but entirely preventable diseases, the implications are catastrophic, and indeed already have been.

For this reason it’s understandable that people want to put an end to it. Of course we don’t want measles (or Trump). But how do we achieve this? Many politicians around the world are pressuring social media giants to filter content to eliminate fake news, while others are advocating legislation to force them to.

I oppose such approaches outright, and believe they pave the way for even greater thought manipulation. (Interpret the term ‘fake news prevention’ as being synonymous with ‘terrorists’, ‘drugs’ and ‘pedophiles’, as per my last article.)

Most news is fake (or misrepresented)

What constitutes fake news, anyway? Read articles about the same event in two ideologically distinct yet well-respected mainstream newspapers and the tilt can be astronomical, with each side criticising the other for bias and corruption. The notion of fakeness is hardly an objective one, and when it comes to statements made by politicians it’s even more perverse.

There is no such thing as an unbiased media source, nor will any story we read have full access to all information, the full background context, or be 100% verifiably correct. Essentially, what propagates over the internet is a close approximation to white noise. Apply the appropriate filter, and you can extract any signal you want.
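
To make the metaphor concrete, here is a toy sketch, taking the white-noise analogy literally (all numbers invented for illustration): keep only a narrow frequency band of pure random noise, and a clean-looking ‘signal’ emerges that was never actually transmitted.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)       # pure white noise: no message in it

spectrum = np.fft.rfft(noise)
mask = np.zeros_like(spectrum)
mask[50:53] = 1.0                       # the "appropriate filter": one narrow band
signal = np.fft.irfft(spectrum * mask)  # out comes a tidy, wave-like "signal"

# The signal is an artefact of the filter, not of the data, just as a biased
# news filter manufactures a narrative from the ambient noise.
print(signal[:5])
```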

Any kind of enforced filtering or information suppression implies certain types of information being removed at the behest of those with the ability to do so. Those people are necessarily in positions of power and influence, and will pursue their own interests over the collective one. The ability to impose filtering enables post-selection bias by mandate. Combined with the false sense of security that a filtering system creates, the outcome is even greater vulnerability to the self-reinforcement and confirmation biases we seek to avoid.

The pretext for power

The implications of the ability of those in power to manipulate this to their advantage are obvious, and are the basis upon which totalitarian societies are built. Already in Singapore there have been deep concerns surrounding this, where anti-fake-news legislation requires organisations to:

“carry corrections or remove content the government considers to be false, with penalties for perpetrators including prison terms of up to 10 years or fines up to S$1m.”

The phrase “the government considers to be false” is an illuminating one.

Once a mandate for filtering is established, its application cannot be confined to what is ‘fake’, nor can we trust those making that determination to wield this extraordinary power. With such a mandate in place, the parameters defining its implementation will evolve with the political agenda, more likely via regulation than via legislation, isolating it entirely from any democratic oversight or debate. Regardless of who is at the helm, be sure that it will be used to undermine those who are not. History substantiates this; it is why we hold the powerful to account, rather than blindly trust them to do what is right.

How to fight fake news

Instead of relying on those with vested interests to take on fake news, we must arm ourselves to do it in their absence. We must do so in a manner that is collective, transparent, decentralised, and robust against malign political interference (i.e. all political interference).

Education

By far the most powerful avenue for combating fake news is to equip people with the skills to do so themselves. For this reason, the following should be taught to all, from the earliest possible age, and made essential components of our education system:

  • Critical thinking and rationalism.
  • Recognising logical fallacies.
  • Elementary statistics and probability theory (even if only qualitatively at an early level).
  • Online research skills, and the difference between what constitutes research versus Googling to find the answer you want to believe (i.e. confirmation bias: “I was trying to find out whether the Moon landing was a conspiracy, and came across this amazing post on 8chan by a guy who runs an anti-vax blog (he’s pretty high up) that provided a really comprehensive and thoughtful analysis of this! BTW, did you know that the CIA invented Hitler in the 60’s as a distraction from the Vietnam war? I fact-checked it with more Googling, and it checks out.”).
  • Encouraging kids to take up debating in school, where these become essential skills.

Finland has reportedly already had great success in pursuing precisely this approach at the school level, with similar discussions emerging in the UK and within the OECD. Finland’s approach (NB: I don’t know the details of the curriculum) is foresighted and correct.

Algorithmically

Spotting fakeness at a glance can be challenging, and even the most mindful social media users will routinely fall for things, making software tools indispensable to a robust process. Modern analytical techniques could certainly be employed to reveal the reliability of information sources, usually with a high degree of accuracy. When social media giants apply fake news filters, this is inevitably the route that will be taken; it can’t possibly be done by hand.

If the purpose of such software tools is to make us aware of misleading information, then their manipulation provides an even more powerful avenue for misleading us than the underlying information itself, owing to the false sense of security they create, and our subsequent subconscious loosening of internal filtering standards.

To illustrate this, the existing social media giants, Facebook and Twitter, are already routinely accused of implementing their anti-hate-speech policies in a highly inconsistent and asymmetric manner. Everyone will have their own views on this, but from my own observations I agree with that assessment. Note that selectively preventing hate speech from one side, whilst not doing so for the other, is an implicit endorsement of the latter, tantamount to direct political support. This type of political support, the ability to freely communicate while one’s opponents are denied it, is the single greatest political asset one can have: the power to platform and de-platform entire organisations or ideologies. It’s no coincidence that the first step taken in the formation of any totalitarian state is centralised control of the media.

This implies that any tools we rely on for this purpose must be extremely open, transparent, understandable, and robust against intentional manipulation. In the same way that you would not employ a proprietary cryptographic algorithm with no knowledge of its internal functioning to encrypt sensitive data, the same standard of trust must be applied when interpreting the reliability of information, let alone outright filtering it.

Simultaneously, these tools must be allowed to evolve and compete. If they are written behind closed doors by governments or by corporations, none of these criteria will be met. The tools cannot fall under any kind of political control, and must be decentralised and independent.

Tools based on community evaluation and consensus should be treated with caution, given their vulnerability to self-reinforcement via positive feedback loops of their own — a new echo-chamber. Indeed, this vulnerability is precisely the one that fake news exploits to go viral in the first place.
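
To see how easily such consensus dynamics run away, consider a minimal sketch (a toy Pólya-urn-style model; all parameters are invented) in which each new vote goes to a post with probability proportional to its existing score. Posts of identical merit end up with wildly unequal rankings, purely through early random luck.

```python
import random

random.seed(1)
scores = [1.0] * 10                      # ten identical posts, equal merit

for _ in range(10_000):                  # each iteration: one user votes,
    total = sum(scores)                  # influenced by what is already popular
    r = random.uniform(0, total)
    cumulative = 0.0
    for i, s in enumerate(scores):
        cumulative += s
        if r <= cumulative:
            scores[i] += 1.0             # the rich get richer
            break

print(sorted(scores, reverse=True))      # a handful of posts hoard the votes
```

Nothing distinguishes the winners except that they happened to be voted up first; that is the echo-chamber mechanism in miniature.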

Will machine learning save us?

Identifying unreliable information sources is something that modern machine learning techniques are extremely well-suited to, and if implemented properly, would likely be our most useful tool in fact-checking and fake news identification. However, these techniques are inherently at odds with my advocacy for algorithmic transparency.

In machine learning, by definition, we don’t hard-code software to spot particular features. Rather, we train it on sample data, allowing it to uncover underlying relationships and correlations for itself. A well-trained system is then, in principle, able to operate upon new data it has not previously been exposed to, and identify similar features and patterns. The problem is that what the system has learned to see is not represented in human-readable form, nor, given its mathematical sophistication, even comprehensible to us. If the original training data were manipulated, the system could easily be coaxed into exhibiting the biases of its trainers, which would be extremely difficult for outsiders to identify.
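
As a minimal sketch of the idea (not any platform’s actual system; the headlines and labels here are invented), consider training a toy text classifier to score headlines for reliability. The model learns only from the labels it is fed, so whoever curates those labels quietly decides what ‘reliable’ means:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled training data: 1 = reliable, 0 = unreliable.
headlines = [
    "Central bank raises interest rates by 25 basis points",
    "Study in peer-reviewed journal links exercise to longevity",
    "Doctors HATE this one weird trick that cures everything",
    "Secret world government confirmed by anonymous blogger",
]
labels = [1, 1, 0, 0]

# No hand-written rules: the model infers correlations between
# word statistics and whatever labels it is given.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(headlines)
model = LogisticRegression().fit(X, labels)

# Whoever controls the labels controls the verdicts: flipping the labels
# on one outlet's articles (poisoning) would teach the model to flag that
# outlet, and nothing in the learned weights would advertise this.
test = vectoriser.transform(["Miracle cure suppressed by big pharma"])
print(model.predict(test))  # a verdict that is learned, not explained
```

An outside auditor inspecting the trained model sees only a matrix of weights; the poisoning lives in the training set, which is rarely published.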

I don’t advocate against the use of machine learning techniques at all. However, I very much advocate recognising their incompatibility with the desire for complete transparency and openness, and recognising that this establishes a direct avenue for manipulation.

Design for complacency

The biggest obstacle of all to telling fact from fiction is our own complacency, and our limited desire to even try. A Facebook or Twitter user can scroll through hundreds of posts in just minutes; if establishing the reliability of a source requires opening multiple new browser windows to cross-check and research each one individually, it will undermine the user experience, and the average user (especially those most vulnerable to influence by fake news) will not bother.

The tools we develop for verifying reliability must treat this as the most important design consideration, providing a fully integrated and user-friendly mechanism which does not detract from the inherently addictive, slot-machine-like appeal of the social media experience. If the tools detract from the user experience, they will be rejected and become ineffective at mass scale.

Modern-day book burning

What interest does the State have in preventing Fake News? None; this is how they subsist. What they actually desire is the selective elimination of information that works against their interests.

In the presence of overwhelming white noise, selective elimination is just as powerful as the creation of new misinformation.

Providing them with a mandate to restrict the information we are able to see (in the ‘public interest’, no less) is to grant them the right to conduct the 21st-century equivalent of 1940s book-burning ceremonies. Needless to say, having established the mandate to hold the ceremonies, they will decide for themselves which books get burnt.

Rather than have our books burnt on our behalf, let us decide which ones we would like to read, and let us develop trustworthy, reliable, and accessible tools for making that determination for ourselves. Admittedly, much of society is highly fallible and unreliable when it comes to making such determinations. But this applies even more so to those in positions of power, given that they necessarily have interests to pursue, and seek a centralised approach for precisely that reason.

There is an important relationship between free people and those in power that must be maintained, whereby our freedoms will only be upheld if accountability is enforced. The latter is our responsibility, not theirs. To delegate the accountability process, of which the free flow of information is the single most pivotal element, to those being held to account is to capitulate entirely, and voluntarily acquiesce to subservience via population control.