UK Government proposes a new approach to regulating artificial intelligence (AI)
The UK Government published the AI Regulation Policy Paper on 18 July 2022. The Policy Paper sets out the Government’s vision for the future “pro-innovation” and “context-specific” AI regulatory regime in the UK.
The Policy Paper outlines six cross-sectoral AI governance principles and confirms that the UK Government is not currently planning to introduce new legislation in the UK to regulate AI. Instead, the UK Government plans to ask existing regulators to interpret and implement the cross-sectoral principles that will be at the heart of the UK’s new AI regulatory regime. The Policy Paper forms part of the UK Government’s National AI Strategy1 and its AI Action Plan.2
Organisations that use or sell AI in the UK should monitor the upcoming AI White Paper (expected in late 2022) and announcements from the relevant regulators. Businesses should also consider who is responsible for AI governance and risk management strategy within their organisation and prepare to align their internal AI strategy with the proposed principles.
The Policy Paper presents an early proposal for six cross-sectoral principles that the UK Government is planning to ask regulators to apply in their sector or domain:
- Ensure that AI is used safely: Safety is likely to be a core consideration in certain sectors (such as healthcare or critical infrastructure). However, the Policy Paper suggests that all regulators should take a context-based approach when determining the likelihood of AI posing a risk to safety and take a proportionate approach to managing this risk.
- Ensure that AI is technically secure and functions as designed: AI systems should be technically secure and should function as they claim and are intended to. The Policy Paper envisages that the functioning, resilience and security of AI systems will be tested (subject to context and proportionality considerations) and that regulators will set out the regulatory expectations in their relevant sector or domain.
- Make sure that AI is appropriately transparent and explainable: The Policy Paper acknowledges that AI systems cannot always be meaningfully explained and in most situations this is unlikely to pose substantial risk. However, the Policy Paper suggests that in certain high-risk settings, decisions that cannot be meaningfully explained might be prohibited by the relevant regulator (for example, a tribunal decision where the lack of explainability would deprive the individual of a right to challenge the tribunal’s decision).
- Embed considerations of fairness into AI: The Policy Paper proposes that regulators define “fairness” in their sector or domain and outline when fairness considerations are relevant (for example, in the case of job applications).
- Define legal persons’ responsibility for AI governance: The Policy Paper confirms that accountability for the outcomes produced by AI systems and legal liability must always rest with an identified or identifiable legal person.
- Clarify routes to redress or contestability: According to the Policy Paper, the use of AI should not remove the right to contest a decision where such right is available to individuals and groups outside the AI setting. Therefore, the UK Government will expect regulators to ensure that outcomes of AI systems can be contested in “relevant regulated situations”.
The proposed principles build on the five OECD AI Principles3 and highlight the areas where the UK Government sees the most risk with the use of AI.
The Policy Paper also confirms that the UK Government will ask the regulators to focus on high-risk concerns (rather than hypothetical or low risks associated with the use of AI) and to consider lighter touch options for regulation (such as issuing guidance or encouraging voluntary measures).
The Policy Paper identified the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC) as the key regulators in its new regime.
While many UK regulators4 and UK Government departments5 have already started to take action to support the responsible use of AI, the Policy Paper highlights some of the current challenges faced by businesses, including a lack of clarity, overlaps, and inconsistency between different regulators.
The risk of multiple regulators being asked to interpret and enforce a set of common principles is that businesses will be given inconsistent or contradictory guidance, or guidance that leads to duplicated effort. The Policy Paper acknowledged this risk and stressed that the UK Government is exploring options for encouraging regulatory coordination through platforms such as the Digital Regulation Cooperation Forum (DRCF)6 to ensure coherence among the regulators and to support innovation.
The UK Government recognises that regulators will need access to the necessary skills and expertise to effectively regulate AI. Although we have seen a number of regulators investing in their AI capabilities over the past several months, it is currently not clear that the regulators will be able to keep pace with investment in AI capabilities from the business sector. The Policy Paper mentioned that the UK Government will explore the possibility of pooling resources and capabilities among multiple regulators, as well as the options for secondments from businesses and academia, to help regulators access the skills and expertise needed.
Comparison to the European Commission’s AI Act proposal
The European Commission’s proposal for an AI Act, published in April 2021, and the UK Government’s Policy Paper set out differing visions for the regulation of AI in Europe and mark one of the first major post-Brexit divergences in regulatory approach between the EU and the UK. We have summarised some of the major differences:
- Sector-specific approach: Unlike the EU’s AI Act proposal, the Policy Paper sets out a de-centralised approach to AI regulation. The UK Government rejected the idea of creating a single regulator with a new mandate and enforcement powers responsible for regulating AI across all sectors. Instead, the UK Government plans to leverage the experience and expertise of existing regulators and ask them to issue guidance to highlight the relevant regulatory requirements applicable to businesses they regulate (such as any requirements for meeting sector-specific licences or standards or appointing named individuals to assume particular responsibilities). The UK Government also hopes that this de-centralised approach will be more adaptable to technological change.
- No central list of prohibited or high-risk use cases: The EU’s AI Act proposal includes a list of prohibited AI practices that are unacceptable in all circumstances (including certain uses of real-time remote biometric identification) as well as a list of high-risk AI systems which have to undergo a conformity assessment and comply with strict requirements in the AI Act. On the other hand, the Policy Paper does not seek to ban specific uses of AI but leaves it to regulators to decide whether the use of AI in a specific scenario should be prohibited or subject to a higher regulatory burden.
- No new legislation (at least for now): The EU’s AI Act is a proposal for a new regulation which would be directly applicable in all EU Member States. On the other hand, the UK Government proposes to initially put the cross-sectoral principles on a non-statutory footing, for example, by issuing executive guidance or a specific mandate to regulators without introducing new legislation. However, the UK Government has not ruled out proposing new legislation where and when needed to ensure effectiveness of the new regulatory framework. Alongside the Policy Paper, the UK Government is proposing changes to existing UK legislation to make the use of AI in the UK easier (such as proposing amendments to Article 22 of the UK General Data Protection Regulation7 or introducing a new text and data mining exemption for any purpose in the Copyright, Designs and Patents Act 19888).
The UK Government is seeking initial views from stakeholders on the proposal set out in the Policy Paper. The call for views is open until 26 September 2022.
Following the call for views, the UK Government is expected to publish an AI White Paper in late 2022 which will set out more concrete proposals for AI regulation in the UK.
What should businesses do now?
- Senior leaders should consider who is responsible for AI governance and risk management strategy within their organisation.
- Businesses should review their internal AI strategy and the proposed principles and consider what steps they will need to take to align the strategy with the new AI regulatory frameworks that are emerging in the EU, UK and elsewhere.9
- Organisations that use AI in the UK or licence AI for use in the UK should monitor the upcoming AI White Paper and announcements from the relevant regulators about how they will interpret, implement and enforce the cross-sectoral principles.
4 For example, the ICO published Guidance on AI and Data Protection and Guidance on Explaining decisions made with AI, the Bank of England and FCA published the AI Public-Private Forum Final Report, the FCA commissioned The Alan Turing Institute to publish a Report on AI in Financial Services, the CMA published a report on Algorithms: How they can reduce competition and harm consumers, Ofcom commissioned a report on the Use of AI in Online Content Moderation, and the MHRA launched the Software and AI as a Medical Device Change Programme.
5 For example, the UK Government’s Office for AI published Guidelines for AI Procurement and Guidance on Ethics, Transparency and Accountability Framework for Automated Decision-Making in the public sector, and the UK Ministry of Defence (MoD) published a policy statement on Ambitious, safe, responsible: our approach to the delivery of AI-enabled capability in Defence which is relevant to MoD suppliers.