What the Online Safety Bill does
The Online Safety Bill delivers the government’s manifesto commitment to make the UK the safest place in the world to be online while defending free expression. The Bill has been strengthened and clarified since it was published in draft in May 2021, and reflects the outcome of extensive Parliamentary scrutiny.
Key points the Bill covers
The Bill introduces new rules for firms which host user-generated content, i.e. those which allow users to post their own content online or interact with each other, and for search engines, which will have tailored duties focused on minimising the presentation of harmful search results to users.
Platforms which fail to protect people will need to answer to the regulator, and could face fines of up to ten per cent of their global annual turnover or, in the most serious cases, having their services blocked.
All platforms in scope will need to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse.
Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.
The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this.
These services will also have a duty to bring in user empowerment tools, giving adult users more control over whom they interact with and the legal content they see, as well as the option to verify their identity.
Freedom of expression will be protected because these laws are not about imposing excessive regulation or state removal of content, but about making sure companies have the systems and processes in place to keep their users safe. Proportionate measures will avoid unnecessary burdens on small and low-risk businesses.
Finally, the largest platforms will need to put in place proportionate systems and processes to prevent fraudulent adverts being published or hosted on their service. This will tackle the harmful scam advertisements which can have a devastating effect on their victims.
What the Bill means for users
Our new online safety laws will make the internet a safer place for everyone in the UK, especially children, while making sure that everyone can enjoy their right to freedom of expression online.
Protecting children:
For children, these new laws will mean that all in-scope companies must assess risks and take action to tackle illegal activity that threatens the safety of children.
In addition, platforms likely to be accessed by children will need to:
prevent access to material that is harmful to children, such as pornography
ensure there are strong protections from activity which is harmful to children, which we expect will include harms such as bullying
If a child does encounter harmful content or activity, parents and children will be able to report it easily. Platforms will be required to take appropriate action in response.
Platforms will also have a duty to report any child sexual exploitation and abuse content that they encounter to the National Crime Agency, to assist with law enforcement efforts to stamp out this appalling crime.
Support for adults:
All in-scope platforms will need to tackle the presence of illegal material on their sites. If users find such material, it will be easy to report it to the company, which will have to act quickly and take it down.
Major service providers will also need to make clear in their terms of service what legal content is acceptable on their sites, and provide user-friendly ways to complain when things go wrong. The categories of content that companies’ terms of service will need to address will be set out in secondary legislation and approved by Parliament.
On the largest sites, adults will have more control over who they interact with online, and the types of harmful content that they can see. This could, for example, mean that on a platform which allows self-harm content, individuals who feel that this content would be damaging to their mental health could choose not to be presented with it.
We are not requiring companies to remove legal content. Adults will still be able to access and post legal content that some may find offensive or upsetting if companies allow that on their services.
Adults will be able to make informed decisions about the online services they use, and to trust that platforms will keep the promises they make.
What’s changed since the draft Bill
Since the draft Bill was published, we have made a large number of changes to strengthen and refine the legislation. Some of the most significant of these changes are that we have:
Introduced a new standalone duty in the Bill requiring Category 1 and Category 2A services to take action to minimise the likelihood of fraudulent adverts being published on their service. This will make it harder for fraudsters to advertise scams online, and if Category 1 and Category 2A services fail to take adequate action they could face enforcement action.
Included priority offences on the face of the primary legislation. This means Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content, rather than waiting for the offences to be made a priority in secondary legislation.
Added further measures to tackle anonymous abuse by requiring Category 1 services to ensure adult users are given the option to verify their identity, and tools to have more control over the legal content that they see and who they interact with — this would include providing adults with the option not to interact with unverified users.
Accepted the Law Commission’s recommendations to create new harm-based, false and threatening communications offences. These new offences will help ensure that the criminal law is focused on the most harmful behaviour whilst protecting freedom of expression, by ensuring that communications intended to contribute to the public interest are not prosecuted (the harm-based offence will also cover behaviour such as sending flashing images to epilepsy sufferers).
Added a provision to the Bill to require all service providers that publish or display pornographic content on their services to prevent children from accessing this content. All services included in this provision will be subject to the same enforcement measures as other services in the Bill.
Amended the legislation so that the power to bring in criminal sanctions for failure to comply with information notices is no longer deferred. These sanctions will instead be introduced as soon as possible after Royal Assent, ensuring that online safety becomes, and remains, a priority for senior executives in the boardroom.
Amended the definition of harmful content accessed by adults so that all categories of such content will be voted on by Parliament, ensuring that platforms are not incentivised to over-remove legal material by taking a wider interpretation of harm than we intend.
Added provisions for Ofcom to recommend the use of tools for content moderation, user profiling and behaviour identification and attached strong safeguards to ensure these are used only where necessary and where it is proportionate to the harms posed. We have made sure the use of these tools will be transparent to users, and will be applied appropriately in relation to illegal content and content that is harmful to children — it won’t apply to private messaging.
Companies our new laws will affect
The laws will apply to companies whose services host user-generated content such as images, videos and comments, or which allow UK users to talk with other people online through messaging, comments and forums. This includes:
the biggest and most popular social media platforms
sites such as forums and messaging apps, some online games, cloud storage and the most popular pornography sites
search engines, which play a significant role in enabling users to access harmful content
Sites which publish pornographic content will also be required under the legislation to ensure that children cannot access age-inappropriate material.
The regulator will have the powers necessary to take appropriate action against all companies in scope, no matter where they are based. This is essential given the global nature of the internet. Some services with user-generated content will be exempt from the new framework, including news websites, some retail services, some services used internally by businesses and email services.
What companies need to do, and harms that are in scope
All companies in scope will need to tackle illegal content on their services. They will also need to assess whether their site is likely to be accessed by children, and if so, protect children from harmful and inappropriate content such as that showing pornography or violence. The regulator will have additional powers to ensure companies take particularly robust action to tackle terrorist activity and child sexual abuse and exploitation online.
A small number of the biggest and highest-risk platforms (the threshold for which will be set by the DCMS Secretary of State, in consultation with Ofcom) will also have to set out in their terms and conditions what types of legal content adults can post on their sites. The legislation will not ban any particular types of legal content, but will ensure that terms and conditions are comprehensive, clear and accessible to all users. Adults will be able to make informed decisions about whether to use a platform based on the material they may see on the site. These companies will need to transparently enforce their terms and conditions. These platforms will also be required to offer adult users the option to verify their identity, as well as tools to control who they interact with and the content they see online.
The requirements will be proportionate, reflecting the different size, resources and risk level of the companies in scope. Our new laws will set expectations on how companies respond to complaints, to raise the bar for their responses. All companies will need to have clear and accessible ways for users, including children, to report harmful content or challenge wrongful takedown.
Who oversees and enforces the framework
Ofcom, the communications regulator, will be appointed as the regulator for the Online Safety regime. Ofcom will:
have a range of powers to gather the information it needs to support its oversight and enforcement activity
be able to require companies to change their behaviour and take measures to improve compliance, including using proactive technologies to identify illegal content and ensure children aren’t encountering harmful material
be able to take tough enforcement action against companies that fail to comply — if companies don’t meet their responsibilities, Ofcom will be able to require them to put things right, impose fines of up to £18m or 10% of global annual turnover, whichever is higher (see the illustrative calculation after this list), or apply to court for business disruption measures (including blocking non-compliant services)
help companies to comply with the new laws by publishing codes of practice, setting out the steps a company should take to comply with their new duties — companies will either need to follow these steps or to show that their approach is equally effective (we expect Ofcom to work collaboratively with companies to help them understand their new obligations and what steps they need to take to protect their users from harm)
be able to bring criminal sanctions against senior managers who fail to ensure their company complies with Ofcom’s information requests, or who deliberately destroy or withhold information, should companies fail to take the new rules seriously
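For illustration only: the maximum financial penalty described in the list above is the greater of £18 million and 10% of a company’s global annual turnover. The minimal sketch below works through that calculation; the function name and turnover figures are hypothetical and are not taken from the Bill.

```python
def maximum_penalty_gbp(global_annual_turnover_gbp: float) -> float:
    """Cap on an Online Safety fine: the greater of £18 million
    and 10% of the company's global annual turnover."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# Hypothetical turnover figures, for illustration only:
print(maximum_penalty_gbp(50_000_000))      # smaller firm: the £18m floor applies
print(maximum_penalty_gbp(25_000_000_000))  # major platform: 10% of turnover (£2.5bn)
```

These figures are a ceiling: the Bill allows fines of up to this amount, and any actual penalty would be determined by Ofcom.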
How the new laws tackle misinformation and disinformation
The duty of care will require platforms to have robust and proportionate measures to deal with harms that could cause significant physical or psychological harm to children, such as misinformation and disinformation about vaccines. Platforms will also need to address in their terms of service how they will treat named categories of content which are harmful to adults, likely to include disinformation. This will mean:
all companies will need to remove illegal disinformation, for example where this contains direct incitement to violence
services accessed by children will need to protect underage users from harmful disinformation
services with the largest audiences and a range of high risk features (Category 1 services) will be required to set out clear policies on harmful disinformation accessed by adults
The regulatory framework will also include additional measures to address disinformation, including provisions to boost audience resilience by empowering users with the critical thinking skills they need to spot online falsehoods, giving Ofcom the tools it needs (through transparency reports) to understand how effectively false information is being addressed, and supporting research on misinformation and disinformation.
How the Bill will protect your freedom of speech online
Our approach will safeguard freedom of expression and pluralism online, protecting people’s rights to participate in society and engage in robust debate online. These laws are not about imposing excessive regulation or state removal of content, but about making sure companies have the systems and processes in place to keep their users safe. Safeguards include:
Platforms will not be required to prevent adults from accessing or posting legal content, nor to remove specific pieces of legal content. We recognise that adults have the right to access content that some might find offensive and upsetting.
Both Ofcom and in-scope companies will have duties relating to freedom of expression. In-scope companies will have a legal obligation to have regard to the importance of freedom of expression when fulfilling their duties, for which they can be held to account.
The largest social media platforms will no longer be able to arbitrarily remove harmful content. They will need to be clear what content is acceptable on their services and enforce these rules consistently. They will also need to put in place additional protections for democratic and journalistic content. Users will have access to effective mechanisms to appeal content that is removed without good reason.
Private messaging platforms
Platforms will need to take measures to protect their users on private channels. Further details will be set out by the regulator, Ofcom, but they could include making these channels safer by design. Both Ofcom and in-scope companies will have to take steps to protect users’ privacy when taking these measures.
As a last resort, Ofcom will be able to require a platform to use highly accurate technology to scan public and private channels for child sexual abuse material. The use of this power will be subject to strict safeguards to protect users’ privacy. Highly accurate automated tools will ensure that legal content is not affected. To use this power, Ofcom must be certain that no other measures would be similarly effective and there is evidence of a widespread problem on a service.
Ofcom will also be able to require companies to scan public channels for terrorist content, subject to the same strict safeguards.
What the Bill says about protections for democracy
The internet has revolutionised our ability to connect with each other and express our views widely. However, the majority of online speech is now facilitated by a small number of private companies, with significant influence over what content appears online.
Regulation will therefore include protections to:
safeguard pluralism and ensure internet users can continue to engage in robust debate online — we will require the largest services with the highest-risk functionalities (Category 1 services) to put in place clear policies to protect content of democratic importance, and to enforce this consistently across all content moderation
ensure platforms don’t discriminate against particular political viewpoints — it is essential that the legislative measures uphold and protect freedom of expression online, so that people can express their views on issues important to democracy
consider how to tackle wider harms to democracy caused by false information (the legislation will establish an expert advisory committee on mis- and disinformation which could include consideration of this — Ofcom will also be able to establish bespoke advisory committees to build understanding of emerging societal harms that would not be addressed by the duty of care alone)
What the Bill says about safeguards for journalism
A free press is one of the pillars of our democratic society. The new legislation has been designed to safeguard access to journalistic content. News publishers’ websites are not in scope of online safety regulation. Below-the-line comments on news publishers’ own sites are also exempt, as there is an explicit exemption in the legislation for comments on content published directly by a service provider.
The legislation also contains safeguards for news publisher content and wider journalistic content when it is shared on in-scope social media platforms.
First, news publishers’ content will be exempted from platforms’ new online safety duties. Tech companies will be under no legal obligation to apply their new safety duties to it. This means platforms will not be incentivised to remove news publishers’ content as a result of a fear of sanction from Ofcom.
The criteria an organisation must meet to qualify as a news publisher are set out in the Bill. If an organisation meets these criteria, then its content will be exempt.
All news publishers’ content will be exempt, not just their journalistic content. The exemption applies when the content is posted by the publisher, and also when it is posted by a different user, as long as the user reproduces or links to the original content in full.
If disputes arise because Ofcom judges that some content is in scope and that platforms have not fulfilled their safety duties for it, but other parties believe that the same content is exempt under the news publishers’ exemption, there will be routes for appeal. The framework has provision so that decisions of the regulator can be appealed to an external judicial body.
Second, the legislation will also impose a duty on Category 1 companies to safeguard all journalistic content shared on their platform (including news publishers’ journalistic content):
Through this duty, these platforms will need to have systems in place to ensure they take into account the importance of the free expression of journalistic content when operating their services. This means they will have to create a policy which balances the importance of free expression for journalistic content against other objectives which might otherwise lead to it being moderated, and implement this policy consistently.
Among other measures, companies will need to create expedited routes of appeal for journalists, so that they can submit appeals if their content is moderated.
This duty will apply to all content that is created for the purpose of journalism and which is UK-linked. This includes citizen journalists’ journalistic content, as well as news publishers’ journalistic content.
Platforms will need to set out how they identify such journalistic content in their terms of service. Ofcom’s codes of practice will provide further guidance on how platforms should achieve this.
What the Bill says about online pornography
Where pornography sites host user-generated content or facilitate online user interactions (including video and image sharing, commenting and live streaming), they will be subject to the duty of care. Platforms which publish pornographic content which is not user-generated will also have a duty to prevent children from accessing that material.
We expect companies whose sites pose the highest risk of harm to children to use robust measures to prevent children from accessing their services, such as age verification. Companies would need to put these technologies in place, or demonstrate that their approach delivers the same level of protection. Companies failing to do this could face enforcement action.
What the Bill says about online racist abuse and anonymity
Under their duty of care to users, tech companies will have to tackle racist abuse on their platforms, whether or not it is posted anonymously. They could face severe fines if they fail to act.
The major platforms will also need to set out what legal but harmful content is allowed on their sites and enforce their terms and conditions consistently and transparently.
Platforms will need to have appropriate systems and processes in place to stop criminals using their services to spread hate, and will need to respond quickly if someone posts racist content, whether words, images, emojis or videos. Companies which fail in this duty of care could face huge fines of up to 10% of global turnover, which for the major social media platforms would amount to billions of pounds.
Ofcom, the new regulator, will set out what companies must do in codes of practice.
The police already have powers to identify criminals who hide behind anonymous profiles. The Bill will force social media companies to stop anonymous users spreading hate.
In addition, major platforms will be required to provide users with tools to tailor their experiences and give them more control over who they interact with and what content they see. This will strengthen the protections against anonymous online abuse.
The government’s new Safety by Design guidance considers steps that companies can take to mitigate harms arising from anonymous accounts. Banning or restricting anonymity brings with it serious consequences for freedom of expression, especially for those who need to protect their identities for legitimate reasons, including those in LGBT+ communities, whistleblowers and survivors of domestic violence.
What the Bill does to protect women
As it does with all forms of online abuse, the Bill will protect women and girls online in five key ways:
Illegal content: All platforms in scope of the Bill will need to proactively remove priority illegal content. This includes hate crime, along with other offences that protect women and girls, such as offences relating to sexual images (for example revenge and extreme pornography) and harassment and cyberstalking offences. Beyond the priority offences, all platforms will need to ensure that they have effective systems and processes in place to quickly take down other illegal content directed at women and girls once it has been reported or they become aware of its presence. This includes cyberflashing, which will be made a criminal offence through the Bill. This will help stop criminals using the internet as a weapon with which to threaten and commit violence against women.
Children: Companies whose services are likely to be accessed by children will also have to protect under-18s from abuse that does not reach a criminal threshold.
Legal but harmful: The big social media companies will need to keep their promises to users by taking action against harmful content that is prohibited under their terms of service. We will set out in secondary legislation a number of priority categories for “legal but harmful” content — which is highly likely to capture misogynistic abuse.
User empowerment: Women will have more control over who can communicate with them and what kind of content they see on major platforms. This will strengthen the protections against anonymous online abuse.
User redress: Women will be better able to report abuse, and should expect to receive an appropriate response from the platform.
If major platforms don’t fulfil their own standards to keep people safe and address abuse quickly and effectively, they’ll face the consequences — including huge fines, and potentially having their sites blocked in the UK.