21 May 2018 · theguardian.com
Detoxifying social media would be easier than you might think

It’s great that Matt Hancock and Sajid Javid have said they will regulate the internet, but what could they actually do? The culture secretary said that when he called in representatives of 14 leading internet companies to discuss his ideas, only four turned up.

The government has said it will introduce new laws to tackle “the full range of online harm”. This must mean dealing not just with serious crime, but with the harm produced every day by social media, such as abuse, bullying, racism and misogyny, with little or no protection for children. These things either aren’t quite crimes, or aren’t serious enough for the police to chase – and there are precious few concrete ideas for how to reduce them.

Social networks have few incentives to stamp them out – after all, a proliferation of controversial or aggressive comments helps drive their growth, regardless of the human impact.

The UK has struggled to find a way to regulate away the poisonous byproducts of social media. There’s much talk of treating platforms as publishers, but there’s been little follow-through as to how this would work to prevent harm.

Instead, politicians fret over whether the baffling technology can be regulated at all; where regulation stops and censorship starts; and how effective rules could ever be for multinational companies anyway. Silicon Valley tech bros even muse about whether regulators are smart enough to keep up with them.

This is, in part, a symptom of the government’s decade-long love affair with tech. As we have rushed headlong into the arms of the big tech companies, we haven’t kept sight of the need for regulatory tools to balance shareholder and societal interests.

For all the hand-wringing about the newfangled nature of this technology, there are important clues from our past on how to effectively regulate the tech giants. Parliament actually has a good track record for creating enduring tools to tackle corporate harms.

Back in the 1950s, the law to protect people from physical harm on other people’s property was a confusing mess, the result of decades of complex case law. The brilliant former Nuremberg prosecutor David Maxwell Fyfe, by then a Conservative lord chancellor, legislated to create a “duty of care” on people or companies that control land or property to make it as safe as reasonably possible for people on or in it.

In the 1970s, a similar tool was used to reform the byzantine and ineffective health and safety rules that had been built on a century of specific laws introduced in response to specific accidents and tragedies. In 1974, the then employment minister Michael Foot took the Health and Safety at Work etc Act through parliament, creating a duty of care on employers towards their employees.

Both pieces of legislation, with their emphasis on putting the onus on companies to prevent reasonably foreseeable harms, remain in force today, and Britain now has the safest workplaces in Europe.

Statutory duties of care work because they define a general problem to be solved, without getting caught up in the specifics of how it happened. This cuts through the complexity of case law to focus on either harm or safety.

Instead of looking backwards, trying to avoid the replication of the types of incidents that inspired lawmakers in the past, the law becomes forward-looking. Harm or safety becomes the thing the company or person on whom the duty sits has to avoid or achieve as far as is reasonably possible.

Focusing on the end point gives companies flexibility to tackle the problem without daily interference from government. The all-encompassing nature of a duty of care also makes it future-proof – it doesn’t matter what fancy new tech thing you come up with, it has to be safe or avoid doing harm.

A duty of care could be the way to clean up the daily bile on social media. We’ve already seen that technology companies respond when the law changes or starts to be properly enforced. For example, the European commission has the big tech companies working in concert to remove extremist material, and tough new laws on sex trafficking in the US have led to the closure of several sex-work websites.

Working with the professor of internet law Lorna Woods, we’ve proposed legislation to create a duty of care on the largest social media companies to their users, backed up by a regulator funded from a fraction of the government’s forthcoming internet revenue tax.

MPs should set out a list of key harms they want to see tackled in law, for instance misogynistic abuse, or safeguarding children. Then it will come down to the regulator and ultimately the courts to decide how well the companies do at reducing these harms. The regulator would have a range of enforcement powers including enforcement notices, fines and powers of direction.

We would exclude services that already have detailed industry rules from regulation – such as the traditional media – and services with fewer than 1 million UK users. This would create a duty of care for platforms such as Facebook, but it would also maintain freedom of speech by leaving unregulated a huge range of smaller platforms.

As is currently the case with the environment, health and safety, and data regulation, companies would have to design (or redesign) their services to reduce harm. We think these rules could be delivered quickly, in a short, simple bill that could fit into the crowded Brexit legislation.

Social media can be a huge force for good, but it’s becoming ever more imperative that we reduce its harmful effects. It needn’t be as complicated as everyone thinks.

William Perrin, a trustee of several charities working for a healthier digital environment, is a former senior civil servant who worked on regulatory policy
