The Long Awaited Online Safety Bill Arrives
EC asks for input on European Cyber Resilience Act; Another U.S. state may be moving data privacy legislation
This is the free edition of the Wavelength; subscribe to get this content four times a week.
The Wavelength is moving! Next week, you’ll get the same great content, but from the Ghost platform. The move will allow for a price decrease because my costs will be lower.
And, it bears mention that content on technology policy, politics, and law that preceded the Wavelength can be found on my blog.
The British government has released the “Online Safety Bill” (Bill 285) after extensive review of a draft bill issued last year. Successive governments in London have aspired to enact a framework to address online harms, and the legislation introduced in the House of Commons today reflects the current government’s views on how best to do that. Over the last six weeks, the agency charged with developing the bill has dribbled out details about the revised legislation, notably the new criminal offences violators will face for practices like cyberflashing and advertising scams. Online platforms like Facebook, Google, TikTok, and others will need to comply with a host of new duties, many of which will turn on what constitutes harm and legitimate activity.
However, the revised bill still allows a British regulator to fine online platforms “up to £18 million or 10% of qualifying worldwide revenue, whichever is greater” and seek criminal penalties against senior company officials. Additionally, London is proposing to fund the new regulations through fees levied on regulated entities. The United Kingdom (UK) is moving to regulate the online world and its harms at the same time other jurisdictions, such as the European Union, Australia, Canada, New Zealand, and the United States (U.S.), are seeking to do the same. Consequently, if this and other bills are enacted, the largest tech companies could face a number of different regulatory regimes regarding online content around the world.
In its press release, the Department for Digital, Culture, Media & Sport (DCMS) claimed that “[i]nternet users are one step closer to a safer online environment as the government’s new world-leading online safety laws are brought before parliament today.” DCMS continued:
§ The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and holds tech giants to account. It will protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.
§ It will require social media platforms, search engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.
§ The regulator Ofcom will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.
§ Today the government is announcing that executives whose companies fail to cooperate with Ofcom’s information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted.
As noted earlier, the government has been slowly releasing details over the last six weeks about changes in the bill with respect to new criminal offenses, protecting children from pornography, online scam advertising, and anonymous trolling (here, here, here, and here). It appears the government’s strategy is to build support for a bill many stakeholders did not like through hot button issues like protecting children from pornography, in ways similar to how governments have used child exploitation to advocate against the use of encryption.
To understand the bill before us, we need to travel backwards a bit. In mid-December, after months of hearings and inquiry, the Joint Select Committee issued its final report along with recommendations for changes in the bill (see here for more detail and analysis). The committee explained the history of the effort to address online harms:
The draft Online Safety Bill is the result of an extensive public policy and parliamentary process, going back nearly half a decade. The draft Bill was published by the Government on 12 May 2021. It followed the Online Harms White Paper, published in April 2019 and the Government’s interim (February 2020) and full (December 2020) responses to the consultation on it. The White Paper itself was the result of a commitment made in the Internet Safety Strategy Green Paper, published in October 2017.
In the report, the committee urged the government to strengthen the enforcement powers of the designated regulator of the new scheme, the Office of Communications (Ofcom), over online platforms; restructure the duties these companies will need to fulfill; expand the acts that will be illegal online, such as cyberflashing (which will require the passage of companion legislation); and establish a right for people to sue platforms for violations. The recommendations also focused on how platforms are designed to promote some content over other content, and so platforms’ algorithms will be subject to duties to foresee and address the risks of extreme content and the delivery systems that promote such material.
Moreover, other committees in Parliament conducted their own inquiries and reported on online harms, offering their recommendations, including:
§ House of Commons Digital, Culture, Media and Sport Committee: The Draft Online Safety Bill and the legal but harmful debate
§ House of Commons Petitions Committee: Tackling Online Abuse
§ House of Lords Communications and Digital Committee: Digital regulation: joined-up and accountable
§ House of Commons Treasury Committee, Economic Crime
In any event, the government of Prime Minister Boris Johnson responded to the Joint Select Committee that examined the draft bill and stated “[w]e have strengthened our legislation by incorporating 66 of the Joint Committee’s recommendations.” Nonetheless, the government rejected the committee’s recommendation that the bill be restructured[1] and claimed “[w]e agree with all of the objectives the Joint Committee has set out, and believe that the Bill already encapsulates and should achieve these objectives.” The government continued:
§ In terms of the specific restructure that the Committee suggested, we believe that using these objectives as the basis for Ofcom’s regulation would delegate unprecedented power to a regulator.
§ [T]he fact that the Bill is complicated does not mean the framework itself will in practice be overly complicated for services, ranging from social media giants to the smallest online forums, to comply with and for Ofcom to enforce. Ofcom’s codes of practice (and any associated guidance) will provide detail and clarity for services as to what steps they need to take to comply with their legislative duties, taking into account their risk profile.
The Joint Committee relied on extensive testimony from stakeholders about what they saw as the confused and confusing nature of the draft bill’s framework, but it appears the government is dismissing these concerns.
Nonetheless, the government sought to stress some of the recommendations it accepted:
§ In recognition of the detrimental and devastating effects that fraudulent advertisements can have on those affected by them, we will introduce a new standalone duty in the Bill requiring Category 1 (the largest and highest risk services in scope) services to take action to minimise the likelihood of fraudulent adverts being published on their service.
§ We also support the Joint Committee’s recommendation of including priority offences on the face of the primary legislation. The government welcomes the committee’s recommendations on the issue of online fraud and we can confirm that fraud offences will be included in this list.
§ The Joint Committee, as well as many of the stakeholders that we have engaged with during the development of this legislation, has recommended further measures to tackle anonymous abuse online. We recognise this concern, and so have strengthened the Bill’s provisions by including two new additional duties on Category 1 services (the largest companies in scope) to ensure adult users are given the option to verify their identity, and tools to have more control over the legal content that they see as well as who they interact with. This would include providing adults with the option not to interact with unverified users.
§ As we announced on 4th February 2022, we accept the recommended offences on harm-based communications, false communications and threatening communications, as laid out in the Law Commission’s report. These new offences will be taken forward in the Online Safety Bill, and will help ensure that the criminal law is focused on the most harmful behaviour whilst protecting freedom of expression, by ensuring that communications that are intended to contribute to the public interest are not prosecuted.
§ Protecting children’s safety and rights is at the heart of the Online Safety Bill. We therefore accept the points made by the Joint Committee on the need to ensure children are protected from access to pornography on dedicated sites as well as on social media. To address this, we have added a provision to the Bill to require all service providers that publish or display pornographic content on their services to prevent children from accessing this content.
§ To achieve our aim of making the UK the safest place in the world to go online, we must give Ofcom robust powers to enforce the online safety framework. We fully support the Joint Committee’s belief that the senior executives of services need to be held accountable for the actions these services take. With this in mind, the legislation will no longer defer the power to bring in the criminal sanctions for failures to comply with information notices, and this will instead be introduced as soon as possible after Royal Assent (likely to be two months after).
In the explanatory notes, the government explained “[t]he new legislation will impose legal requirements on:
§ Providers of internet services which allow users to encounter content generated, uploaded or shared by other users, i.e. user-generated content (“user-to-user services”);
§ Providers of search engines which enable users to search multiple websites and databases (“search services”);
§ Providers of internet services on which provider pornographic content is published or displayed.”
In the bill, it is stated that “[a]ll providers of regulated user-to-user services must comply with the following duties in relation to each such service which they provide:”
§ A duty, in relation to a service, to take or use proportionate measures to effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.
§ A duty to operate a service using systems and processes that allow users and affected persons to easily report content which they consider to be content of a kind specified below (with the duty extending to different kinds of content depending on the kind of service): illegal content for all users, content harmful to children if children can access the content, and “content harmful to adults” for larger platforms (i.e. Category 1 services).
§ A duty to operate a complaints procedure that serves to allow users to lodge complaints, with heightened requirements for children and Category 1 services.
§ Regarding children, user-to-user services “that are likely to be accessed by children” have two duties: those related to children’s risk assessments and protecting children’s online safety.
With respect to the duties on freedom of expression and privacy, user-to-user services have the following baseline duties with additional duties for Category 1 services (i.e. the largest user-to-user services):
§ When deciding on, and implementing, safety measures and policies, a duty to have regard to the importance of protecting users’ right to freedom of expression within the law
§ When deciding on, and implementing, safety measures and policies, a duty to have regard to the importance of protecting users from a breach of any statutory provision or rule of law concerning privacy that is relevant to the use or operation of a user-to-user service (including, but not limited to, any such provision or rule concerning the processing of personal data).
§ A duty to include clear and accessible provisions in the terms of service informing users about their right to bring a claim for breach of contract if content which they generate, upload or share is taken down, or access to it is restricted, in breach of the terms of service.
The Online Safety Bill spells out the increased protections due to children for those user-to-user services likely to be accessed by children:
A duty, in relation to a service, to take or use proportionate measures to effectively—
(a) mitigate and manage the risks of harm to children in different age groups, as identified in the most recent children’s risk assessment of the service, and
(b) mitigate the impact of harm to children in different age groups presented by content that is harmful to children present on the service.
A duty to operate a service using proportionate systems and processes designed to—
(a) prevent children of any age from encountering, by means of the service, primary priority content that is harmful to children (for example, by using age verification, or another means of age assurance);
(b) protect children in age groups judged to be at risk of harm from other content that is harmful to children (or from a particular kind of such content) from encountering it by means of the service (for example, by using age assurance).
Very similar duties apply to search service providers with respect to children.
The bill’s ambit is largely limited to the UK because the duties in the Online Safety Bill for user-to-user services will only apply to
§ the design, operation and use of the service in the United Kingdom, and
§ in the case of a duty that is expressed to apply in relation to users of a service, the design, operation and use of the service as it affects United Kingdom users of the service.
Category 1 services have duties above and beyond smaller platforms, including:
§ Duties related to protecting adults from harm, including, among others, placing the most recent adults’ risk assessment in the terms of service
§ A duty to include in a service, to the extent that it is proportionate to do so, features which adult users may use or apply if they wish to increase their control over harmful content.
§ A duty to operate a service using proportionate systems and processes designed to ensure that the importance of the free expression of content of democratic importance is taken into account when making decisions about—
o how to treat such content (especially decisions about whether to take it down or restrict users’ access to it), and
o whether to take action against a user generating, uploading or sharing such content.
§ A duty to operate a service using proportionate systems and processes designed to ensure that the importance of the free expression of journalistic content is taken into account when making decisions about—
o how to treat such content (especially decisions about whether to take it down or restrict users’ access to it), and
o whether to take action against a user generating, uploading or sharing such content.
Also, DCMS would designate which user-to-user services meet Category 1 thresholds. Moreover, there will also be Category 2A and 2B services, which the bill defines as:
§ a “Category 2A service” means a regulated search service or a combined service [that meets the to-be-defined threshold]
§ a “Category 2B service” means a regulated user-to-user service [that meets the to-be-defined threshold]
There is no additional detail in either the bill or explanatory notes on Category 2A and 2B services.
Search services would also be split into two groups, with larger entities (likely Google and a few others like Bing) again needing to meet heightened responsibilities. The government stated “[t]hose search services which meet the Category 2A threshold conditions will be under a duty to produce annual transparency reports and to put in place proportionate systems and processes to prevent the risk of users encountering fraudulent adverts.”
More generally, search service providers have the following duties:
§ A duty to carry out a suitable and sufficient illegal content risk assessment at regular intervals
§ A duty, in relation to a service, to take or use proportionate measures to effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.
§ A duty to operate a service using systems and processes that allow users and affected persons to easily report search content which they consider to be content of a kind specified below (with the duty extending to content that is harmful to children depending on the kind of service).
§ A duty to operate a complaints procedure in relation to a service that—
o allows for relevant kinds of complaint to be made (as set out under the headings below),
o provides for appropriate action to be taken by the provider of the service in response to complaints of a relevant kind, and
o is easy to access, easy to use (including by children) and transparent.
§ the duties about freedom of expression and privacy, including:
o When deciding on, and implementing, safety measures and policies, a duty to have regard to the importance of protecting the rights of users and interested persons to freedom of expression within the law.
o When deciding on, and implementing, safety measures and policies, a duty to have regard to the importance of protecting users from a breach of any statutory provision or rule of law concerning privacy that is relevant to the use or operation of a search service (including, but not limited to, any such provision or rule concerning the processing of personal data).
§ the duties about record-keeping and review
Again, with respect to jurisdiction, the Online Safety Bill pertains to search service providers only with respect to:
§ the search content of the service,
§ the design, operation and use of the search engine in the United Kingdom, and
§ in the case of a duty that is expressed to apply in relation to users of a service, the design, operation and use of the search engine as it affects United Kingdom users of the service.
As mentioned above, the government made much of the fact that fraudulent advertising would be within the scope of the bill. All Category 1 and Category 2A services have duties with respect to fraudulent advertising, a term that encompasses a range of existing fraud crimes that are illegal in the offline world. During the Joint Committee’s deliberations, Members and witnesses called for offline crimes to be crimes in the digital world as well. The government has heeded this call with respect to fraud.
In the explanatory notes, the government states that the Office of Communications (Ofcom) will be charged with drafting codes of practice for service providers, “setting out the recommended steps that providers can take in order to comply with the legal requirements.” The government noted:
A provider may take different measures to those recommended in the codes of practice. A provider will be treated as having complied with the relevant legal obligation if the provider takes the steps recommended in the relevant code of practice for complying with that obligation.
With respect to codes of practice, Ofcom needs to prepare and issue codes:
§ for providers of [search] services describing measures recommended for the purpose of compliance with duties [on illegal content] so far as relating to terrorism content.
§ for providers of [search] services describing measures recommended for the purpose of compliance with duties [on illegal content] so far as relating to CSEA content.
§ one or more codes of practice for providers of [search] services describing measures recommended for the purpose of compliance with the relevant duties (except to the extent that measures for the purpose of compliance with such duties are described in [another] code of practice…
§ for providers of Category 1 services and providers of Category 2A services describing measures recommended for the purpose of compliance with the duties set out in [the section on fraudulent advertising]
In the third sub-bullet above, there is reference to “relevant duties,” which means the duties related to:
§ illegal content
§ children’s online safety
§ adults’ online safety
§ user empowerment
§ content of democratic importance
§ journalistic content
§ content reporting
§ complaints procedures
The bill makes clear that compliance with a code of practice will be treated as compliance with a duty. Consequently, regulated entities will need to hew to the codes Ofcom promulgates if they want the easiest means of complying with some of the new duties.
Additionally, DCMS will play a supervisory role over the issuance of codes of practice, which must be laid before Parliament. Some of the codes such as those pertaining to CSEA must be passed by both houses before taking effect while others would be operative if Parliament does not act within a certain timeframe.
The British government will have sweeping powers of enforcement under the Online Safety Bill, and Ofcom specifically will have a dizzying array of new enforcement responsibilities.
Ofcom would nonetheless be able to levy significant fines and seek criminal prosecution of companies for violations. The government explained:
§ The Bill confers new powers on OFCOM enabling them to act as the online safety regulator. OFCOM will be responsible for enforcing the legal requirements imposed on service providers. The Bill gives OFCOM the power to compel in scope providers to provide information and to require an individual from an in scope provider to attend an interview; powers of entry and inspection; and the power to require a service provider to undertake, and pay for, a report from a skilled person.
§ The new powers conferred on OFCOM also include the power to give enforcement notifications (which may set out the steps required to remedy a contravention) and the power to impose financial penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. If a service provider fails to comply with a confirmation decision, OFCOM can, in certain circumstances, apply to the Courts for an order imposing business disruption measures on that provider.
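The “whichever is greater” penalty cap is a simple two-prong formula, and a minimal sketch makes the interaction of the prongs concrete (the function name and revenue figures below are illustrative; only the £18 million floor and 10% share come from the bill):

```python
def max_penalty_gbp(qualifying_worldwide_revenue: float) -> float:
    """Maximum fine Ofcom could impose under the bill: the greater of
    a flat £18 million or 10% of qualifying worldwide revenue."""
    FLAT_CAP = 18_000_000      # £18 million floor
    REVENUE_SHARE = 0.10       # 10% of qualifying worldwide revenue
    return max(FLAT_CAP, REVENUE_SHARE * qualifying_worldwide_revenue)

# For a hypothetical platform with £50 billion in qualifying worldwide
# revenue, the 10% prong dominates: a maximum fine of £5 billion.
print(max_penalty_gbp(50_000_000_000))
# For a small service with £100 million in revenue, the £18 million
# floor applies instead.
print(max_penalty_gbp(100_000_000))
```

In practice, the revenue-based prong is what gives the cap teeth against the largest platforms, since 10% of global turnover for a company like Meta or Alphabet dwarfs the flat floor.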
The bill then gets into what may be the stickiest issues in a bill full of them. The Online Safety Bill tries to define terms like “Recognized News Publisher,” “Illegal Content,” “Content that is harmful to children,” and “Content that is harmful to adults.” Of course, these definitions will be crucial, and if a person with a Substack newsletter is not found to be a recognized news publisher, his or her content will be treated differently with fewer protections. What’s more, looking at just one of these definitions, illegal content, shows how things may get problematic quickly. This term is basically “content that amounts to a relevant offence,” and the latter term is defined as:
§ an offence specified in Schedule 5 (terrorism offences),
§ an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse),
§ an offence specified in Schedule 7 (other priority offences), or
§ an offence, not within [the previous three categories], of which the victim or intended victim is an individual (or individuals).
Turning to the end of the bill, Schedule 7 lists crimes like “Assisting suicide,” “Threats to kill,” “Drugs and psychoactive substances,” and “Sexual exploitation.” And so, at least as far as Schedule 7 goes, the Online Safety Bill will make offline crimes illegal in the digital world. Making things even more complex, Schedule 6 is split between those CSEA crimes in effect in England, Wales, and Northern Ireland on the one hand, and those in effect in Scotland on the other. Overall, there is a considerable amount of ambiguity about what constitutes proscribed content that could be addressed in the House of Commons or House of Lords.
The Online Safety Bill “also requires providers of internet services which make pornographic material available by way of the service (as opposed to enabling users to generate or share such content) to ensure that children are not normally able to encounter that pornographic content.” There seems to be a significant loophole in the duty of service providers to make pornography off limits: user-generated pornography. User-generated content is defined as “content—
§ that is—
o generated directly on the service by a user of the service, or
o uploaded to or shared on the service by a user of the service, and
§ that may be encountered by another user, or other users, of the service by means of the service.
Turning to Part 5 of the bill, which details the ban on some pornographic content, it is stipulated that user-generated pornography uses the definition of user-generated content. There is a separate definition for “provider pornographic content” that seems to exclude user-generated pornographic content. Moreover, it is stated that “[p]ornographic content that is user-generated content in relation to an internet service is not to be regarded as provider pornographic content in relation to that service.” This sounds very much like non-professional pornography posted by a user to an internet service would not need to be policed with respect to children’s access, a loophole in making sure children cannot normally encounter pornography.
Finally, the bill “also replaces existing communications offences with three new communications offences: a harmful communications offence, a false communications offence and a threatening communications offence, as well as the creation of a new “cyberflashing” offence.”
Ofcom will charge fees to those services it regulates to offset the costs of regulation based on a company’s qualifying worldwide revenue. This model would likely result in U.S. multinationals being taxed to fund British regulation. I imagine Washington will have something to say about this funding mechanism. In any event, DCMS will exercise discretion over Ofcom in this regard and the fees must also be laid before Parliament as additional means of oversight.
Other Developments
Australia’s House of Representatives’ Select Committee on Social Media and Online Safety issued a report from its “Inquiry into Social Media and Online Safety.”
The White House’s National Science and Technology Council (NSTC) and Office of Science and Technology Policy (OSTP) published a report “A Plan To Advance Data Innovation.”
United States (U.S.) Senators Ben Ray Luján (D-NM), Cynthia Lummis (R-WY), and Ron Wyden (D-OR) introduced the “Fair Repair Act of 2022” (S.3830) “which will protect consumers, farmers, and small businesses by ensuring the right to repair.”
The Iowa House of Representatives passed House File 2506, a data protection bill, by a 91-2 vote, and it needs to be reported out of Senate committee by 18 March in order to advance.
Ireland’s Data Protection Commission (DPC) “published a statistical report on the DPC’s handling of cross-border complaints under the GDPR’s One-Stop-Shop (OSS) mechanism.”
The United States Government Accountability Office (GAO) issued reports titled “Management Report: Improvements Needed in the Bureau of the Fiscal Service's Information System Controls Related to the Schedule of Federal Debt” and “Science & Tech Spotlight: Counter-Drone Technologies.”
The White House’s National Science and Technology Council (NSTC) and Office of Science and Technology Policy (OSTP) published a “Critical and Emerging Technologies List Update.”
The United States (U.S.) Federal Trade Commission and the Justice Department’s Antitrust Division will co-host a Spring Enforcers Summit on 4 April.
The United States (U.S.) Federal Communications Commission (FCC) took the following action at its 16 March open meeting: initiating an “Inquiry on Preventing Digital Discrimination;” announcing the Final Group of Connected Care Pilot Program Projects; requesting “Input on Resolving Disputes Over Costs for Pole Replacements;” and revoking “Pacific Networks' & ComNet's Telecom Service Authority.”
The United Kingdom’s (UK) Information Commissioner’s Office (ICO) “announced fines totalling £405,000 to five companies responsible for over 750,000 unwanted marketing calls targeted at older, vulnerable people.”
The United States (U.S.) National Institute of Standards and Technology (NIST) issued “for public comment an initial draft of the AI Risk Management Framework (AI RMF)…[that] addresses risks in the design, development, use, and evaluation of AI systems.” NIST also released “Towards a Standard for Identifying and Managing Bias within Artificial Intelligence” (SP 1270), “which offers background and guidance for addressing one of the major sources of risk that relates to the trustworthiness of AI.”
The European Commission “launched a public consultation to gather the views and experiences of all relevant parties on the forthcoming European Cyber Resilience Act…[that] seeks to establish common cybersecurity rules for digital products and associated services that are placed on the market across the European Union.”
Tweets of the Day
Further Reading
“Why the Kremlin is still active on Facebook, Twitter and YouTube” By Will Oremus and Cristiano Lima — Washington Post
“The grassroots giant: How Google became a lobbying powerhouse” By Ben Brody — Axios
“‘A torrent of abuse’: victims pin hopes on UK online safety bill” By Dan Milmo and Hibaq Farah — The Guardian
“How WordPress and Tumblr are keeping the internet weird” By Nilay Patel — The Verge
“China-UAE ties raise US technology safety questions for lawmakers” By Gopal Ratnam — Roll Call
“Substack’s Ideology” By Nathan Baschez — Every
“Amazon closes MGM deal” By Margaret Harding McGill — Axios
“A Big Bet to Kill the Password for Good” By Lily Hay Newman — WIRED
“Tensions in House GOP over how to go after Big Tech are boiling over” By Cristiano Lima and Aaron Schaffer — Washington Post
“Cyber Troops Stretched Thin in Ukraine Response as NATO Builds Common Air Picture” By Shaun Waterman — Air Force Magazine
“Facebook and Instagram users not allowed to call for death of Putin” By Dan Milmo — The Guardian
“Russian propaganda on Ukraine's non-existent 'biolabs' boosted by U.S. far right” By Ben Collins and Kevin Collier — NBC News
“Israel says government sites targeted by cyberattack” — Al Jazeera
“‘We are unstoppable’: How a team of Polish programmers built a digital tool to evade Russian censorship” By Emma Vail — The Record
Coming Events
§ 17 March
o The European Union Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) will hold a hearing on the “General Data Protection Regulation implementation, enforcement and lessons learned”
o The United States (U.S.) Federal Trade Commission (FTC) will hold an open meeting with this tentative agenda:
§ Staff Presentation on the E-Cigarette Report for 2015-2018: Staff will present findings from a report on the sales and marketing of e-cigarettes, with a particular focus on the use of these products by youth.
o The United States (U.S.) Senate Banking, Housing, and Urban Affairs Committee will hold a hearing titled “Understanding the Role of Digital Assets in Illicit Finance.”
o The United States (U.S.) House Science, Space, and Technology Committee’s Research and Technology Subcommittee will hold a hearing titled “Setting the Standards: Strengthening U.S. Leadership in Technical Standards.”
o The United Kingdom’s House of Commons will convene a general committee for a morning formal meeting on the “Product Security and Telecommunications Infrastructure Bill.”
o The United Kingdom’s House of Commons will convene a general committee for an afternoon formal meeting on the “Product Security and Telecommunications Infrastructure Bill.”
o The United Kingdom’s House of Lords’ Fraud Act 2006 and Digital Fraud Committee will hold an oral evidence hearing.
o New Zealand’s Parliament’s Economic Development, Science and Innovation Committee will hold a private session on the Digital Identity Services Trust Framework Bill.
§ 21 March
o The European Parliament’s Internal Market and Consumer Protection and Civil Liberties, Justice and Home Affairs committees will hold a joint hearing on the proposal for an Artificial Intelligence Act.
§ 6 April
o The European Data Protection Board will hold a plenary meeting.
§ 15-16 May
o The United States-European Union Trade and Technology Council will reportedly meet in France.
§ 16-17 June
o The European Data Protection Supervisor will hold a conference titled “The future of data protection: effective enforcement in the digital world.”
[1] Joint Committee on the draft Online Safety Bill, Draft Online Safety Bill, Report of Session 2021–22
60. We recommend that the Bill be restructured to contain a clear statement of its core safety objectives—as recommended in paragraph 52. Everything flows from these: the requirement for Ofcom to meet those objectives, its power to produce mandatory codes of practice and minimum quality standards for risk assessments in order to do so, and the requirements on service providers to address and mitigate reasonably foreseeable risks, follow those codes of practice and meet those minimum standards. Together, these measures amount to a robust framework of enforceable measures that can leave no doubt that the intentions of the Bill will be secured.
61. We believe there is a need to clarify that providers are required to comply with all mandatory Codes of Practice as well as the requirement to include reasonably foreseeable risks in their risk assessments. Combined with the requirements for system design we discuss in the next chapter, these measures will ensure that regulated services continue to comply with the overall objectives of the Bill—and that the Regulator is afforded maximum flexibility to respond to a rapidly changing online world.