By Guest Author 22/03/2019


Dr Paul Ralph

By live-streaming the Christchurch mosque shooting, Facebook has drawn our collective attention, once again, to the role of social media in mass shootings and terrorism.

Each time something like this happens, the tech giants claim to be doing everything they can, the tech pundits bemoan the problem as intractable, and the politicians fail to hold the corporations responsible for their role in these tragedies.

But social media companies are not doing everything they can, the problem is not intractable, and our leaders can hold these companies accountable.

What is the problem?

Social media plays three roles in terrorism and hate speech:

  1. Some of the world’s worst people use social media platforms to find each other, communicate, collaborate, reinforce each other’s views, and recruit new members.
  2. Social media allows people to distribute and amplify hate speech and propaganda in text, images, audio and video.
  3. In this case, the attack was streamed live on Facebook.

New Zealand’s Human Rights Act and similar legislation in many other countries ban publishing, distributing and broadcasting hate speech. For us non-lawyers, at least, it is difficult to imagine how broadcasting a terrorist attack or spreading neo-Nazi propaganda could be exempt from these laws.

What should Facebook do?

Facebook has created a machine that can be used to distribute and amplify hate speech. Facebook’s algorithms determine what appears in users’ news feeds, so Facebook cannot claim to be just a platform.

  1. Facebook should disable live-streaming until it discovers a reliable method of preventing the live-streaming of violent crimes.
  2. Facebook should replace its current newsfeed with a simple chronological list of friends’ posts (see the sketch after this list) until it discovers a reliable method of preventing messages of hate from being amplified by its algorithms.
  3. Facebook should fund extensive research into automatically recognising hate speech and propaganda in text, images, audio and video. Facebook took in over US$50 billion in revenue last year. It could give million-dollar grants to 5,000 computer scientists to study this problem, and that would still be less than 10% of a year’s revenue.
  4. Facebook should verify all accounts by requiring users to upload identification. Right now, if a user is banned, they can simply create a new account. Only by linking accounts to actual people can those who post hate speech be permanently removed. Facebook can then much more aggressively crack down on users and groups who share hate speech.
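
To make item 2 concrete, here is a minimal sketch, in Python, of what a purely chronological feed amounts to: friends’ posts sorted by timestamp, with no engagement-based ranking model for hateful content to game. The `Post` fields and function names are illustrative, not Facebook’s actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str           # hypothetical field: the posting friend's user ID
    created_at: datetime  # when the post was published
    content: str

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Return friends' posts newest-first. No engagement signals and no
    ranking model -- so nothing is algorithmically amplified."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

# Usage (fetch_friend_posts is a hypothetical data-access call):
# feed = chronological_feed(fetch_friend_posts(user_id))
```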

While the ability to speak anonymously is crucial for a free society, Facebook does not have to be a vehicle for anonymous speech, and anonymous speech does not have to include live-streaming. This is not like voter ID: no one has a right to use Facebook, and if you don’t want to hand over your ID, you don’t have to have an account. Verifying accounts allows social media sites to limit abusable features to verified, established accounts, which could significantly hinder the spread of hate speech and videos of violent crimes.
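
As a rough sketch of what limiting abusable features to verified, established accounts could look like, the following gates live-streaming on ID verification and account age. The account model and the 90-day threshold are assumptions for illustration, not Facebook’s actual policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative threshold -- an assumption, not an actual Facebook rule.
MIN_ACCOUNT_AGE = timedelta(days=90)

@dataclass
class Account:
    user_id: str
    created_at: datetime
    id_verified: bool  # True once a government ID has been checked

def may_live_stream(account: Account, now: datetime) -> bool:
    """Allow live-streaming only for verified, established accounts."""
    account_age = now - account.created_at
    return account.id_verified and account_age >= MIN_ACCOUNT_AGE
```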

What should YouTube do?

YouTube is arguably worse: it has created a machine that enables the radicalisation of vulnerable people by recommending malicious propaganda videos. Its recommendation system is easily gamed to promote radical content and thereby radicalise gullible viewers. What’s worse, YouTube inadvertently pays malicious actors to produce and share messages of hate.

  1. YouTube should disable its recommendation system (see the sketch after this list) until it discovers a reliable method of preventing videos containing hate speech from being recommended.
  2. Like Facebook, YouTube should disable live-streaming, fund extensive research by independent third parties, verify accounts and permanently ban perpetrators.
  3. In situations where malicious actors are spamming the site with copy after copy of a video like that of the Christchurch shooting, YouTube should shut down distribution by temporarily suspending all video uploads.
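
To illustrate items 1 and 3 operationally, here is a minimal sketch of site-wide kill switches that gate the recommendation and upload features. The flag names and the hate-speech classifier are hypothetical, not YouTube’s actual infrastructure.

```python
# Hypothetical site-wide flags, e.g. held in a configuration service.
FLAGS = {
    "recommendations_enabled": False,  # item 1: off until filtering is reliable
    "uploads_enabled": True,           # item 3: flipped off during a spam wave
}

def recommend(candidate_videos, classifier):
    """Recommend nothing while the system is disabled; otherwise drop
    anything the (hypothetical) hate-speech classifier flags."""
    if not FLAGS["recommendations_enabled"]:
        return []
    return [v for v in candidate_videos if not classifier.is_hateful(v)]

def accept_upload(video) -> bool:
    """Reject every upload while the site-wide suspension is active."""
    return FLAGS["uploads_enabled"]
```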

What about other social media?

Despite getting the most press, Facebook and YouTube are not the worst offenders. The worst offenders are smaller sites like 4chan and 8chan. These sites could benefit from many of the same suggestions above, but let’s face it, they don’t seem very interested in tackling this problem. Which brings us to our next question:

What should society do?

New Zealand is a small country, and the social media giants are not headquartered here. However, as the EU General Data Protection Regulation shows, a country can regulate giant corporations headquartered elsewhere.

  1. New Zealand could levy enormous fines (e.g. one year’s profits) on social media companies for violating its Human Rights Act.
  2. Any company that doesn’t pay could have its services blocked within the country, for example by blocking its IP addresses.
  3. The distribution and amplification of hate speech and videos of violent crimes could be added to the criminal code. This is a delicate matter. Hate speech has to be clearly defined as celebrating or inciting violence against a visible minority. Vigorous criticism, off-colour jokes, and generally offending, insulting or upsetting people should not be classified as hate speech. However, this step is necessary for our next and most crucial recommendation.
  4. Leaders of social media companies could be indicted, extradited, tried, convicted, and imprisoned. Widespread criticism does not seem sufficient to motivate these companies to change their ways. Maybe jail time will.

Be bold

Some of these suggestions may seem bold. Social media companies may claim that such measures are inconvenient, break aspects of their sites or undermine benefits for some users. Such claims are absurd: laws don’t apply only when compliance is convenient. It’s inconvenient for a local man to go to prison for robbing a liquor store, but we don’t care, because punishment isn’t supposed to be convenient.

If a software feature cannot be implemented in a way that doesn’t break the law, don’t implement it at all. If social media sites will not comply with the law, then we should not allow them to exist.

This letter was written by Dr. D. Paul Ralph, a senior lecturer in computer science at the University of Auckland, on behalf of the software engineering class of 2020 and the current students of the course Computer Science 345: Human-Computer Interaction. The university and its departments do not necessarily agree with these views.

Dr. Ralph has received research funding from Google, which owns YouTube.

This post originally appeared on Noted under a CC BY-ND 3.0 license. Featured image: Christiaan Colen, Flickr CC.