If you’ve paid any attention to the news over the last couple of days, you may have seen a fair amount of controversy surrounding changes made on 29 November to the UK’s Online Safety Bill.
The Bill
So what is the Bill and what does it entail? It began life in 2019 as the Online Harms White Paper and was introduced as a bill to Parliament earlier this year by the previous Culture Secretary, Nadine Dorries. The Bill aims to provide a legislative blueprint for countries around the world and to make the UK the safest place to go online.
So far, so good, you might be thinking. However, the controversy lies in some of the measures that the Bill seeks to introduce, which some say will inhibit freedom of speech by over-policing content that might in fact be balanced, of artistic value or of democratic importance.
Why do we need it?
Because the law develops notoriously slowly compared with the light-speed of technological development, there is arguably a tech-law lag that needs to be closed so that the law reflects the impact technology now has on our day-to-day lives.
Currently, most user-to-user and search engine services in the UK are not subject to any kind of regulation. This is perhaps due, in part, to a previously held belief that responsibility for online safety fell firmly within the realm of parenting or, at most, self-regulation by tech companies themselves.
Supporters of the Bill argue that behaviour or content deemed illegal in real life should be treated the same way if and when it occurs online, and that the legislative vacuum in which the cyber world seems to operate in this regard has persisted for too long.
Background
The online dangers that the Bill aims to address in relation to children are best illustrated by the tragic case of Molly Russell, a 14-year-old girl who died in 2017. The inquest into her death heard that after seeking out one or two images relating to self-harm and suicide, algorithms on Instagram and Pinterest bombarded her with similar content: content which would be deemed legal but which, of course, has the potential to be extremely harmful when consumed en masse. In something of a landmark ruling, the coroner stated that Molly had died not from suicide but from “an act of self-harm while suffering from depression and the negative effects of online content”.
The main issues
‘Legal but harmful’ concept axed
This brings us to the most recent development: on 29 November 2022, the section of the Bill which previously incentivised social media firms to remove content falling into the category of ‘legal but harmful’ was axed.
In its absence, the current Culture Secretary, Michelle Donelan, has introduced what she calls a ‘triple shield’ of protection, whereby social media firms will be legally required to:
1) Remove illegal content
2) Take down material which breaches their own terms of service, and
3) Provide adults with a greater choice over the content they see and engage with.
Age verification
Ofcom, the newly appointed regulator for online safety, reports that one third of children currently have access to adult content online, largely because registering for most social media accounts requires users simply to enter their date of birth. Going forwards, if the Bill passes, tech companies will need to demonstrate that their age verification processes can ensure that users are the age they say they are. Some sites have had success with artificial intelligence software that can estimate a user’s age from a selfie or even a voice recording. Clearly this presents a challenge, and there is a balance to be struck in relation to privacy laws.
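To see why self-declared dates of birth are considered inadequate, consider the simplified Python sketch below. It is a hypothetical illustration, not any platform’s actual code: the check is only as reliable as the date the user chooses to type in.

```python
from datetime import date

MINIMUM_AGE = 18  # illustrative threshold for age-restricted content


def age_from_claimed_dob(claimed_dob: date, today: date | None = None) -> int:
    """Compute age in whole years from a self-declared date of birth."""
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet.
    had_birthday = (today.month, today.day) >= (claimed_dob.month, claimed_dob.day)
    return today.year - claimed_dob.year - (0 if had_birthday else 1)


def passes_age_gate(claimed_dob: date) -> bool:
    """The weakness: nothing verifies the claim, so a child can
    simply type an earlier year and pass."""
    return age_from_claimed_dob(claimed_dob) >= MINIMUM_AGE


# A 13-year-old claiming to have been born in 1990 sails through:
print(passes_age_gate(date(1990, 1, 1)))  # True
```

The Bill would, in effect, require platforms to replace the unverified claim in this sketch with something demonstrably reliable, such as the AI-based estimation mentioned above.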
There is an argument that tech companies may be forced to choose between precluding children from accessing their sites entirely and sanitising their sites to a level appropriate for their youngest user.
Algorithms
The story of Molly Russell demonstrates very clearly the issue presented by these platforms’ use of algorithms. The advantage for big tech is that the more we see of what we like, the longer we stay online and the easier we are to sell to. However, the model has the potential to be hugely problematic, creating an online echo chamber that is unhealthy and artificial even at the best of times, and incredibly damaging where the content relates to self-harm, suicide, anorexia, misogynist views or radicalism.
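As a rough illustration of that feedback loop, the deliberately simplified sketch below (not any platform’s real recommender) always serves the topic a user has engaged with most. Because every impression reinforces that topic’s score, the feed converges on a single theme after only one or two initial views.

```python
from collections import Counter
import random


def recommend(engagement: Counter, catalogue: list[str]) -> str:
    """Serve the user's most-engaged topic; fall back to a random one."""
    if engagement:
        return engagement.most_common(1)[0][0]
    return random.choice(catalogue)


catalogue = ["sport", "music", "self-harm", "cooking"]
engagement = Counter()

# One or two initial views are enough to lock the loop in.
engagement["self-harm"] += 2

for _ in range(10):
    topic = recommend(engagement, catalogue)
    engagement[topic] += 1  # each impression deepens the bias

print(engagement)  # Counter({'self-harm': 12}) -- the echo chamber
```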
Vulnerable adults
There is a faction who feel that the changes made on Tuesday will have a detrimental effect on vulnerable adults, who will now fall outside the scope of protection: adults will still be able to post and view anything legal (even if potentially harmful), provided it doesn’t violate the platform’s terms of service. Others feel that this concession on the ‘legal but harmful’ issue was necessary to ensure that the Bill can actually pass before the end of this session of Parliament. If it doesn’t, law-makers could be going back to the drawing board, which, given that this Bill has taken four years to get this far, is likely in no interested party’s best interests.
Conclusion
The impact of the Bill on the tech industry is still hard to predict; much will be revealed when Ofcom releases its Code of Practice. If technology is your industry and you’re currently worried about what the Bill could mean for you, rest assured that Ofcom’s expectation is not that tech companies should, or could, eradicate harmful or illegal content entirely. This would sadly be impossible. What Ofcom will be concerned about is the adequacy of the systems in place to protect users, particularly those relating to age verification and the use of algorithms.
The content of this article is for general information only. It is not, and should not be taken as, legal advice. If you require any further information in relation to this article please contact the author in the first instance. Law covered as at December 2022.