
UK’s Online Safety Bill: What is it and What Does it Mean for Me?



After years of debate, deliberation, and delay, the Online Safety Bill is now law in Britain. The act, which received Royal Assent in October 2023, aims to make the UK “the safest place in the world to be online.” While the legislation incorporates a broad range of measures, it places particular emphasis on protecting children.


Due to its breadth and complexity, many Britons are unsure what the act means for their everyday web-browsing habits. This post presents the need-to-know information and explains how the bill might affect everyday internet users like you.


What is the UK’s Online Safety Bill?


The Online Safety Bill is a sweeping piece of legislation designed to protect people - especially children - from harm online. This stringent form of internet regulation covers everything from content moderation to online harassment.


The bill introduces new obligations regarding how big tech companies must design, operate, and moderate their services. These rules intend to shield users from online harm, including:


· Underage access to adult material

· Scam advertisements

· The non-consensual sharing of intimate images, including deepfakes

· Child abuse material

· Terrorism-related content


According to UK Home Secretary Suella Braverman, “The Online Safety Act’s strongest protections are for children. Social media companies will be held to account for the appalling scale of child sexual abuse occurring on their platforms, and our children will be safer.”


The UK’s communications regulator, the Office of Communications (Ofcom), monitors compliance with and enforces the act.


When did the UK Online Safety Bill come into law?


The British Parliament passed the Online Safety Bill on September 19, 2023. A little over a month later, on October 26, 2023, the bill received Royal Assent, formally becoming law as the Online Safety Act (OSA).

Getting there was no easy feat. Since digital minister Jeremy Wright proposed the bill in the 2019 Online Harms White Paper, the UK has been led by four prime ministers. Events such as the death of teenager Molly Russell and the Cambridge Analytica scandal saw the bill evolve and climb the public agenda. Conversely, the COVID-19 pandemic and Brexit saw it take a temporary backseat.


How large is Britain’s Online Safety Bill?


The Online Safety Bill is a mind-bogglingly big document that has ballooned since its inception in 2019. New provisions were added as successive prime ministers and digital ministers took office, while other ideas were scrapped once they proved too controversial.


The first version of the bill presented to parliament in 2021 spanned 145 pages. By the time it was formalised in 2023, the legislation had almost doubled in length to 262 pages.


Several new provisions were added shortly before Royal Assent, including:


· Criminalising cyber-flashing (sharing explicit, unsolicited pictures through social media or dating apps)

· Obligating online services to allow users to verify identities to tackle harassment from anonymous trolls

· Incorporating mandatory age checks for pornographic websites


What are websites and social media platforms expected to do?


The Online Safety Bill obligates social media and other websites to adopt a duty of care to protect British children. Digital platforms must take the following measures:


· Rapidly remove illegal content, including the encouragement of self-harm, and take steps to prevent it from being posted

· Prevent children from accessing age-inappropriate and harmful content

· Enforce age limits for age-restricted content, such as pornography

· Make the risks posed to children on large social media platforms more transparent

· Provide simple ways to report online issues as they arise

The act also empowers adults to control the content they view online by forcing social media platforms to:

· Comply with the promises made in their terms and conditions

· Allow users to filter out harmful content they don’t wish to see, such as online bullying and harassment

· Moderate illegal content, such as dangerous misinformation, death threats, revenge porn, people-smuggling advertisements, and content encouraging suicide

The bill will force social media platforms to block fake or scam advertisements, which have become a source of cybercrime in the UK. Furthermore, these organisations must take measures to prevent users from publishing animal cruelty content, even when the activity takes place outside Britain.


When will the Online Safety Bill come into effect?


Although the bill is now law in the UK, online platforms don’t need to adopt all these measures immediately. Ofcom, the national communications regulator, is implementing a staggered, three-phase roadmap to give tech companies sufficient time to adapt.


The first phase covers illegal content, such as child abuse and terrorism-related material. Phase two covers child safety, pornography, and the protection of women and girls, while phase three covers “additional duties for categorised services,” such as transparency reports and user empowerment measures.


The multi-stage rollout will take place between 2023 and 2026. Ofcom began consulting on illegal content and children’s safety issues within weeks of Royal Assent.


Some social media giants have already started taking action. Snapchat, for example, has begun purging underage user accounts, while TikTok is bringing in more robust age verification measures.


What are the penalties for non-compliance?


The penalties for non-compliance are severe. Platforms found breaching their duty of care face a fine of up to £18 million or 10 percent of their global annual revenue, whichever is larger. Therefore, fines given to prominent social media platforms, such as Meta, could reach billions of pounds. For severe violations, Ofcom may block online platforms in the UK, and executives could be sent to jail.
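
To put the “whichever is larger” rule in perspective, here is a minimal, illustrative Python sketch. The £18 million floor and the 10 percent rate come from the act; the example revenue figures are hypothetical and chosen only to show how the cap scales.

```python
# Illustrative sketch of the act's maximum-fine rule:
# the greater of £18 million or 10% of global annual revenue.

FIXED_CAP_GBP = 18_000_000   # £18 million floor set out in the act
REVENUE_SHARE = 0.10         # 10 percent of global annual revenue

def max_fine(global_annual_revenue_gbp: float) -> float:
    """Return the larger of the fixed cap and 10% of revenue."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * global_annual_revenue_gbp)

# Hypothetical revenue figures, for illustration only:
print(f"£{max_fine(50_000_000):,.0f}")       # smaller platform: the £18m floor applies
print(f"£{max_fine(100_000_000_000):,.0f}")  # Meta-scale revenue: £10 billion
```

As the second example shows, a platform with revenue in the hundreds of billions of pounds could face a fine running into the billions, which is why the 10-percent clause matters far more to the largest platforms than the £18 million figure.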


Ofcom has the authority to monitor violations and enforce penalties.


How will the UK Online Safety Bill be enforced?


Ofcom can now block access to search engines and user-to-user services in the UK. The regulator may make these interventions through internet access providers or app stores. Most violations, however, will result in warnings and fines.


According to Ofcom head Dame Melanie Dawes, the regulator won’t act as a “censor” that takes down content. Instead, new standards will make social media platforms safer by design - the onus will be on them to remove illegal content.


UK Online Safety Bill controversies


Few would argue against the intent of Britain’s newly established Online Safety Act. However, critics say implementing the legislation places an excessive burden on digital organisations and social media platforms.


WhatsApp, for example, took issue with how the bill forces online tech companies to identify child abuse content “whether communicated publicly or privately.” As the popular instant messaging app uses end-to-end encryption, not even the company itself can view private messages. Identifying illegal content would require weakening or removing end-to-end encryption, undermining user privacy and defeating the purpose of the technology. Rival instant messaging app Signal expressed similar concerns.


Google has also flagged issues. The tech giant says it is difficult to differentiate between illegal and legal content at scale. The act may therefore require it to deindex a significant amount of helpful, entirely legal content in order to stay compliant.


Meta believes forcing users to verify their identity will exclude posters who wish to remain anonymous for privacy reasons. Moreover, Wikimedia opposes measures obligating the site to collect personal data, including age verification. Spokeswoman and internet freedom advocate Rebecca MacKinnon has stated, “The Wikimedia Foundation will not be verifying the age of UK readers or contributors,” a stance which potentially exposes the organisation to sanctions.


How does the Online Safety Bill affect me?


If everything goes to plan, the bill will make the internet safer for all Britons. Children will be exposed to less harmful, age-inappropriate content, while adults will get more control over the material they view online. Scam advertisements will dwindle, and social media platforms will scrub illegal content.


Intimate images, including deepfakes, will less frequently be shared without consent, as the bill makes it significantly easier to prosecute offenders. Guilty parties face up to six months in custody for a base offence.


Of course, things don’t always go to plan. The Online Safety Bill is an ambitious piece of legislation, a broad policy shift that forces tech companies to make expensive and potentially unfeasible changes.


Some organisations, such as Wikimedia and WhatsApp, claim the act’s restrictive nature makes their services unviable in Britain and have threatened to abandon the local market. Should these companies cease operating in the UK, users will have to switch to an alternative or go without.


As Ofcom is implementing the Online Safety Bill over a lengthy three-phase roadmap, how the act actually affects the typical British internet user remains to be seen.


Summary


The Online Safety Bill is a broad piece of legislation that could significantly alter the internet in Britain. While supporters say the act will improve online safety in the UK, detractors believe overburdensome obligations will force popular tech companies to restrict their services or shut up shop.


With Ofcom gradually rolling out the legislation over several years, the Online Safety Act’s true impact is yet to be seen. How the bill affects everyday internet users may depend on how Ofcom chooses to enforce its new powers.



Article by Harry Stewart


