By Sarah-Jane O'Connor 19/03/2019


An 18-year-old has appeared in court on charges relating to sharing a video of a terrorist attack last week, but where is the accountability for the social media platforms that enabled it to be shared in the first place?

Much of Friday afternoon was surreal: we went from celebrating thousands of New Zealand teenagers protesting the lack of action on climate change, to sitting in stunned horror as news update after news update arrived from Christchurch, where 50 people were killed in an attack on two mosques.

It added insult to injury to find out the alleged gunman had apparently live-streamed the shooting at Masjid Al-Noor – on Riccarton’s Deans Ave – on Facebook, with copies appearing on YouTube and other social media shortly after the 17-minute video cut out.

New Zealand’s Chief Censor has officially classified the video as “objectionable”, which means it is banned. Sharing the video could lead to a fine of up to $10,000 or 14 years’ jail time. The classification is retroactive, which means anyone who shared the video on Friday or over the weekend could face charges, alongside the one man who already has.

I found the video early on Friday afternoon, before media had begun to report on its existence. I felt dazed and befuddled: how could a video be available so quickly? Surely it was a joke, or an old video. But I saw enough to confirm it was the Masjid Al-Noor – I had lived around the corner from it for two years – and to realise the horror of the extent of the killings. I reported the video to YouTube and it was removed shortly after.

Except it wasn’t really removed: it was reposted again and again and again. Colleagues had the video appear in their Twitter feeds that night. Schoolchildren in lockdown apparently saw it on their phones while they were trying to find out more information about what was happening in their city. Victims’ families saw it and identified their loved ones who had been killed.

Facebook’s response to date has been lacklustre. Yes, it removed 1.5 million versions of the video in the first 24 hours – 1.2 million of which were blocked before upload – but the social media giant has been silent on any further action to prevent such an act happening again on its platform. YouTube has declined to state how many videos it removed, but The Washington Post has detailed the company’s struggle to get ahead of the people uploading the video.

This isn’t the first time Facebook has been used to distribute such filth. In 2017, four people were charged after they kidnapped and tortured a man with disabilities: one of the perpetrators live-streamed the events on Facebook. Three were sentenced to between three and eight years in prison, while the fourth was sentenced to probation and community service.

It seems nonsensical that anyone can stream live with no moderation or pre-registration. A tool that can be useful for news reporting and citizen journalism has been utterly perverted by someone in pursuit of notoriety and maximum impact. The pain caused to victims’ families and friends who viewed the video, and to the children who stumbled upon the stream, is unfathomable.

Associate Professor David Parry, head of the computer science department at AUT, said one way to approach the issue could be for Facebook to increase the number of moderators and introduce a time delay before videos can go live. But because Facebook – along with other social media companies such as YouTube and Instagram – depends on this user-generated content, “and because the market is very easy to enter”, he thought the companies would be reluctant to censor material or delay transmission.

It’s sad to think Dr Parry is probably right. Given the battle for audience (and, therefore, advertising) between Facebook, YouTube and Twitter (the latter absorbing live-streaming app Periscope in 2015), I can’t imagine them wanting to give up a single inch of the audience they’ve claimed in their seemingly inexhaustible growth.

All this has had me thinking about my complicity in continuing to use Facebook. I hated it already, I despise it now, and yet so many workplaces are inextricably linked to Facebook, such has been its power of infiltration.

When I hit publish on this post, it will automatically publish to Facebook and Twitter. Any time we publish something about 1080, vaccination, genetic engineering or any number of other topics, I sweat about what the Facebook comments will be like. We can moderate strongly on our own websites – Sciblogs has a comments policy and we don’t allow comments on the Science Media Centre’s website – but our hands are tied when it comes to Facebook. I can hide, delete and block, but I can’t do any form of pre-moderation before comments appear. Some mornings I wake up to find a horrible comment posted just after I went to bed, and I feel helpless.

It’s taken until now, 2019, for Facebook to finally take action against the anti-vaccination sentiment running rampant among its community groups. Swinburne University of Technology senior lecturer Dr Belinda Barnet says Facebook and Twitter have done a good job at getting rid of ISIS and other extremist content, but “I don’t feel they’ve paid as much attention to right-wing extremism, and in many cases have promoted it”.

So long as Facebook continues to allow itself to be used to stoke fears, hatred and misinformation, it is complicit. As Prime Minister Jacinda Ardern said of social media in Parliament on Tuesday afternoon: “they are the publisher, not just the postman”. Facebook must take responsibility, and this week its silence is deafening.


2 Responses to “Where is the social responsibility among social media companies?”

  • Some relevant quotes from Taplin, Jonathan. Move Fast and Break Things: how Facebook, Google and Amazon have cornered culture and what it means for all of us. Pan Macmillan UK. Kindle Edition.
    “Imagine if you were a terrorist organization in 1990 wanting to get a propaganda video you made viewed by two million people. The task would be almost impossible. As Osama bin Laden did in the early days, you might try to distribute videos via street vendors. You certainly couldn’t get access to any television network, either in the West or in the Islamic world. But today ISIS can make a video, post it for free on YouTube, and get two million views in a week — especially if it involves something horrific like a beheading.”
    ——

    “YouTube claims it has no control over what users post on its platform, but this is not true. You will notice that there is no porn on the platform. YouTube has very sophisticated content ID tools that screen for porn before it can ever be posted. These same tools could be used to screen out posts from ISIS before they appear. But YouTube does not want to change the status quo because its advertising-based business model relies on having the maximum number of users and the maximum number of posts. YouTube has even gone so far as to place ads on ISIS videos.”