Featured Article: What Is The Online Safety Bill?
Following recent announcements of a toughening-up of the (draft) Online Safety Bill, we look at what the bill is, and what its implications are.
What Is The Online Safety Bill For?
The UK government’s Online Safety Bill is (draft) legislation that’s designed to place a ‘duty of care’ on internet companies which host user-generated content in order to limit the spread of illegal content on these services.
The idea of the Online Safety Bill is essentially to prevent the spread of illegal content and activity (e.g., images of child abuse, terror material, and hate crimes), as well as to protect children from harmful material, and also to protect adults from legal but harmful content.
The Bill applies to social media platforms, video-sharing platforms, search engines, and other tech services, and requires them to put in place systems and processes to remove illegal content as soon as they become aware of it. The Bill also requires these services to take additional proactive measures with regards to the most harmful ‘priority’ forms of online illegal content.
The priority offences listed in the draft Bill are terrorism and child sexual abuse and exploitation. The Department for Digital, Culture, Media and Sport’s Secretary of State also has powers to add further priority offences (with Parliament’s approval) via secondary legislation once the Bill becomes law.
Other Illegal Behaviour
The Bill can also be applied to other illegal behaviour, including activities only recently made illegal, which have emerged alongside the ability to target individuals or to communicate en masse online.
In summary, the main groups of offences that the Bill now covers are:
– Encouraging or assisting suicide
– Offences relating to sexual images (revenge and extreme pornography)
– Incitement to and threats of violence
– Hate crime
– Public order offences (harassment and stalking)
– Drug-related offences
– Weapons / firearms offences
– Fraud and financial crime
– Money laundering
– Controlling, causing or inciting prostitution for gain
– Organised immigration offences
Following criticism that the original draft Bill’s scope hadn’t gone far enough, and that services/firms would only have been forced to take such content down after it had been reported to them by users, the Bill has now been strengthened (hence the quite extensive list of offences shown above). On 4 Feb this year, Digital Secretary Nadine Dorries announced that the Bill had been strengthened in the following ways:
– The addition of extra priority illegal offences; i.e. revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. The naming of the new offences is designed to remove the need for them to be set out in secondary legislation later. The government says that it will also enable Ofcom (which issues the fines under the Bill) to take quicker enforcement action against tech businesses which fail to remove the named illegal content.
– The requirement for services to be proactive and prevent people being exposed in the first place rather than waiting for users to report incidents before taking the content down.
Three More New Offences Being Considered
The government is also considering the Law Commission’s recommendations for three other offences to be created and added to the Online Safety Bill, namely cyberflashing, encouraging self-harm, and epilepsy trolling.
Back in July, the Commission recommended other new offences which the Digital Secretary has only just confirmed will be created and legislated for in the Online Safety Bill. These cover harmful and abusive emails, harmful social media posts and WhatsApp messages, as well as ‘pile-on’ harassment (where many people target an individual with abuse, e.g. in a comments section). These new offences do not apply to regulated media – print and online journalism, TV, radio, and film.
One large aspect of the debate around the Online Safety Bill is the naming of specific individuals/executives in offending companies. The draft Bill, for example, already included the ability to impose criminal sanctions on named tech executives. These sanctions (i.e. prison sentences), however, were originally due to be delayed for two years (a grace period) after the laws are passed, but some UK MPs have been asking the government to remove this long grace period before criminal sanctions can be faced. Digital Secretary Nadine Dorries, who has personal experience of being targeted by trolls, is believed to favour a six-month grace period before the imposition of prison terms for those tech execs who fail to remove “harmful algorithms”.
Freedom and Legal Commentators
Freedom groups, such as Index on Censorship and the Open Rights Group, have expressed concerns about decisions on whether speech is harmful or not being outsourced to Silicon Valley companies. Some legal commentators have also expressed concern that the Bill essentially allows the government to delegate the investigation of, and judgements about, online crimes to the tech companies/social media platforms.
The big social media platforms have anticipated the Bill for some time and, although they have given no major reactions to the most recent announcements, they are thought to be broadly in agreement with its aims. For example, Monica Bickert, vice-president of content policy at Meta (Facebook), said recently (in the Telegraph): “While we won’t agree with all the details, we’re pleased the Online Safety Bill is moving forward.”
Other Comments and Criticism
The NSPCC recently criticised the Bill (in an open letter to Nadine Dorries) for not doing enough to put children at its heart. The Labour Party has also said that the Bill needs to go further in terms of tougher sanctions on executives who breach the new laws.
Ofcom, the UK’s communications regulator, will be in charge of issuing fines for offences under the Bill. For example, Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites, or block them from being accessible in the UK.
What Does This Mean For Your Business?
Tech companies, particularly social media platforms, have been forced to make changes for several years now in response to a series of trust-damaging events (e.g. Facebook’s Cambridge Analytica scandal and the platform’s use for influence in the previous US election and UK referendum), pressure from governments, and widespread user concerns about the safety of young and vulnerable people online. The government sees the recently boosted powers of the (draft) Online Safety Bill as a way to send a much clearer message to online services, particularly social media platforms, that these issues now need to be taken more seriously, with the threat of possible prison terms for executives designed to make companies take more notice and make more changes. Facebook already appears to have started morphing into something different for the future (Meta) and, for example, Twitter’s co-founder Jack Dorsey stepped down last November. The aims of the Bill appear noble in terms of the extra protections it may offer against a much wider range of offences, but it remains to be seen how well it will work in practice when it passes into law, and whether it needs to be strengthened further.