April 9, 2019

The government’s white paper on online harms could rein in the excesses of big tech, while placing a burden of IT compliance and cost on businesses of all sizes


Government proposals for new online safety laws have finally arrived.

Ideas for better protecting children from harmful content and stemming the flow of fake news have been mooted in Westminster for some time, but on Monday the government published its Online Harms White Paper for public consultation.

In it are proposals to place legal liability for protecting end-users firmly on online platforms like Facebook, Google, and Snapchat.

To make that stick, the Department for Digital, Culture, Media and Sport has proposed that an independent regulator, similar to Ofcom, write a code of practice for technology companies. The new regulator would have the power to penalise companies with stiff fines, or even to block them, if they fail to protect users from a host of online ills, including the dissemination of child sexual abuse material, extremist content, revenge pornography, intimidation, hate crimes, violent content – or ‘disinformation’.

Senior managers could be held liable for breaches of the code.

While few would argue that governments shouldn’t try to stop or control harmful content, the proposals have sparked some controversy. Some of the online harms listed are already illegal; others, such as ‘disinformation’ and ‘intimidation’, are not so clearly defined – or even prohibited in law.

Child safety advocates say the time for action is long overdue, while proponents of free expression worry about a slippery slope. For businesses the issue of legal liability for content needs to be better understood.


Why organisations of all sizes need to be concerned


Online harms legislation is clearly targeting big tech, and Facebook in particular. From the rise of online predators to potential interference in elections, social media has given rise to some very unpleasant, and even dangerous, behaviours and content types.

The algorithms that automatically curate social networks like Instagram, YouTube, and Facebook are written to show visitors more of what they like and share, in order to personalise the experience. The intention is to show people who, for example, like Beyoncé videos more videos in the same vein, or to show people who share cooking recipes more cooking recipes.

Sadly, those algorithms don’t yet seem able to distinguish benign content from harmful content, or to recognise signals that indicate danger. If someone follows a hashtag related to depression or self-harm, they may find their feed full of recommendations for more of the same, creating a morbid thematic environment that reinforces negativity.
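The amplification loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not any platform’s actual system – the function, field names, and data are invented for this example – but it captures the core problem: engagement-based ranking boosts whatever themes a user interacts with, with no notion of whether a theme is benign or harmful.

```python
from collections import Counter

def recommend(history, catalogue, k=3):
    """Naive engagement-based ranking (illustrative only): score each
    candidate post by how often its tags appear in the user's interaction
    history. Nothing here distinguishes benign themes from harmful ones --
    the loop simply amplifies whatever the user has engaged with."""
    tag_counts = Counter(tag for post in history for tag in post["tags"])
    scored = sorted(
        catalogue,
        key=lambda post: sum(tag_counts[t] for t in post["tags"]),
        reverse=True,
    )
    return scored[:k]

# A user whose history is dominated by one theme...
history = [
    {"id": 1, "tags": ["cooking"]},
    {"id": 2, "tags": ["cooking", "baking"]},
    {"id": 3, "tags": ["cooking"]},
]
catalogue = [
    {"id": 10, "tags": ["travel"]},
    {"id": 11, "tags": ["cooking"]},
    {"id": 12, "tags": ["baking", "cooking"]},
    {"id": 13, "tags": ["news"]},
]
top = recommend(history, catalogue, k=2)
# ...is shown more of the same theme, whatever that theme happens to be.
```

Swap the ‘cooking’ tags for a self-harm hashtag and the ranking behaves identically, which is precisely the failure mode the white paper is concerned with.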

All of this has created a new set of risks, social ills, and opportunities for criminality that governments have a duty to police. Alongside that are security concerns. We’ve written before about social media as a vector for phishing campaigns, and Facebook seems determined to prove that it is terrible at securing its end users.

But there are already questions about the duty of care Westminster wants to impose on private companies. The proposals would apply to more than just Facebook and Twitter, covering ‘any company that allows users to share or discover user generated content or interact with each other online’, according to a government statement.

Does that include file-hosting sites, messaging services, search engines, or the comments left on newspaper articles? For now we have to assume the definition could be that broad.


Changing how we think about end users


The government’s proposals would give the regulator the power to force tech companies to publish annual reports detailing the amount of harmful content on their platforms and how it is being addressed. It would also be able to compel companies to respond to users’ complaints and act on them quickly, while setting codes of practice that companies and their managers would be legally obliged to follow.

That could include proving that they have taken concrete steps to minimise the spread of misleading information, for example by hiring fact checkers to vet content when elections are underway.

The potential applicability of these measures beyond big tech is where organisations need to keep a close eye on developments. The concern goes beyond worries that emerging business models and tech startups will be stifled under the legislation.

All business is now digital. The steady expansion of AI and machine learning in business environments could raise questions in the future about ‘foreseeability’ where liability for protecting end users of online services is concerned.

Meanwhile a myriad of professional and personal information already traverses company networks, or sits on personal devices connected to those networks. The potential for a business to become an unwitting participant in ‘online harms’ is already very real.

This issue isn’t new. Facebook and other tech firms have devoted massive technical resources to it in recent years, but still seem unsure how to reliably capture harmful content.

The imposition of legal liability will force them to shift R&D budgets further to meet that challenge. But companies without IT budgets in the billions of dollars, or a team of 30,000+ people dedicated to protecting their internet users, may be at real risk of getting caught up in a digital dragnet.

Without doubt, the shift towards cracking down on harmful content can only be a good thing, but to what extent the large tech companies will take on that responsibility remains to be seen. Arguably they already have the resources and awareness they need to make positive changes without legislation, which leaves sceptics thinking this might simply create further compliance work for smaller organisations – who are, in most cases, already doing their bit.
