UK Regulators Share Views on Artificial Intelligence and Machine Learning Regulation
On 1 February 2024, the UK government’s Department for Science, Innovation and Technology and His Majesty’s Treasury addressed a joint letter to the Bank of England (BoE) and the Prudential Regulation Authority (PRA) requesting an update on their strategic approach to the adoption and implementation of artificial intelligence (AI) and machine learning (ML) in the financial services sector.
Background
The UK government’s AI white paper sets out a pro-innovation approach to regulating AI, recognising that AI and ML have the potential to rapidly enhance efficiency in the financial services sector, both in the UK and globally. The BoE and the PRA have been collaborating with the Financial Conduct Authority (FCA) to support the safe adoption and use of AI and ML in this sector. In response to the request for an update on their respective strategies, the regulators have reviewed how AI and ML are currently used by financial services firms and the potential implications of that use for their respective regulatory objectives and the overarching regulatory framework.
Broadly, these analyses show that the current framework appears fit for purpose and suitably flexible to deal with the current and future challenges and opportunities posed by AI systems and models. However, the regulators acknowledge that certain unforeseen risks, gaps and shortcomings might require further guidance or intervention in the future.
Regulatory objectives
Generally, the BoE and the PRA take a ‘technology-agnostic’ approach to regulation, and they do not usually mandate or prohibit specific technologies. However, the regulators recognise that certain technologies and their risks might have an impact on regulatory objectives.
BoE
The BoE is the UK’s central bank, and it regulates UK banks, other financial firms and certain financial market infrastructure (FMI) services. The BoE’s statutory objectives are to maintain monetary and financial stability in the UK. As part of this, the BoE’s Monetary Policy Committee and Financial Policy Committee have secondary objectives of supporting the UK government’s economic policy and enhancing resilience of the UK financial system. Under the Financial Services and Markets Act 2023, the BoE has a secondary objective to facilitate ‘innovation in the provision of FMI services’.
PRA
Part of the BoE, the PRA is responsible for the prudential regulation and supervision of 1,330 banks, building societies, credit unions, insurers, and investment firms. The PRA’s objectives are to promote the safety and soundness of the firms it regulates and to contribute to securing an appropriate degree of protection for insurance policyholders. Its two secondary objectives are to facilitate effective competition between firms and to facilitate the international competitiveness of the UK economy and its growth in the medium to long term.
FCA
The FCA regulates the financial services industry in the UK, with a focus on protecting consumers, keeping the industry stable and promoting healthy competition between financial services providers.
In view of their respective obligations, and given AI’s potential for driving efficiencies and innovation, the regulators acknowledge the need to understand how to support the safe and responsible adoption of AI and ML in the financial services sector.
The regulators jointly issued a discussion paper (DP5/22) on AI and ML, and input from a variety of stakeholders confirmed that the regulators’ approach is broadly consistent with the government’s five principles set out below. Alongside these principles, the government envisions a future regulatory framework for AI/ML that supports innovation and is proportionate, trustworthy, adaptable and clear, with collaboration amongst regulators.
Implementing the UK’s AI regulatory principles
The BoE and PRA have reviewed their current policies against the UK government’s five principles. Whilst it is acknowledged that AI and ML pose novel risks, the broad consensus amongst stakeholders appears to be that the current framework remains fit to address such issues.
1. Safety, security and robustness
The UK government’s definition establishes that ‘AI systems should function in a robust, secure and safe way throughout the AI life cycle’, specifically acknowledging that ‘risks should be continually identified, assessed and managed’. The regulators recognise relevant concerns, specifically those related to outsourcing and third-party risk management, and DP5/22 queried whether the approach in this area could be improved. Whilst the FCA and the PRA acknowledge that this might be an area for development, their respective policies place the onus on regulated firms to manage such third-party risk in a proportionate manner.
2. Appropriate transparency and explainability
AI systems should be transparent, and those using them should be able to explain how certain outcomes have been reached. DP5/22 flagged certain risks if the principles of transparency and explainability are not adhered to, but the regulators note that the model risk management principles for banks place the onus on banks to establish a risk-based approach to address these concerns. Additionally, specific requirements apply to the processing of personal data under the UK General Data Protection Regulation (GDPR), which also applies to financial services firms. The current regulatory framework has therefore been deemed sufficient to address the risks posed by the data life cycle within AI/ML systems and models. It is recognised, however, that this might not go far enough, given the growing complexity of AI/ML models, and this was one of the few areas flagged for potential further development or guidance.
3. Fairness
The government states that ‘AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes’. The BoE and PRA appreciate this requirement but acknowledge that its focus is on interactions with consumers, which fall mainly within the remit of the FCA. Banks are subject to the Equality Act 2010, which is recognised to limit certain risks in this area. Further, as before, the regulators note that the expectation is on firms to define what fairness means for their operations and to provide suitable justification for their approach.
4. Accountability and governance
The government states that ‘governance measures should be in place to ensure effective oversight of the supply and use of AI systems’. It expects clarity of accountability across the whole life cycle of an AI system. The regulators deem these requirements to be suitably covered by the Senior Managers and Certification Regime, which is implemented via the PRA Rulebook. Further, a framework for model risk management has been set out in relation to specific expectations on model risk. Both measures aim to improve the overall allocation of responsibility where regulated firms make material use of AI/ML and breach their responsibilities in terms of overall risk management of technology systems. As part of DP5/22, the BoE specifically sought feedback on whether there should be a dedicated senior management function for AI; however, respondents deemed the current regulatory framework suitable to manage AI risk. The model risk management principles for banks also set out requirements in respect of governance oversight, requiring a senior management function to take responsibility for the model risk management framework and for its implementation and execution.
5. Contestability and redress
The government requires that ‘users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm’. It is acknowledged that this requirement sits with consumer-facing regulators such as the FCA, and the PRA and BoE have therefore not taken further measures. It is also partly addressed by the UK GDPR, under which consumers subject to decisions based on automated processing have a right to redress. The BoE and PRA do not deem any further action necessary in this area.
Current and future developments for managing risks
As part of DP5/22, the BoE and the PRA gathered feedback from multiple stakeholders on whether further clarification or legal requirements might be helpful to manage the risks of AI adoption responsibly. As discussed above, this feedback deemed the current framework broadly fit for purpose, and the paper did not provide evidence that an AI-specific framework is necessary. However, it did indicate a limited number of areas where further guidance might be welcome:
- The current regulatory landscape for data management is fragmented, which stakeholders find challenging to navigate.
- Model risk management principles for banks came into effect in May 2024; these also address governance expectations, including identifying which senior management function will assume overall responsibility for the model risk management framework.
- The BoE, the PRA and the FCA are collectively assessing their strategic approach to critical third parties and what regulatory intervention might be necessary to mitigate risk.
Regulatory collaboration
Overall, the BoE, PRA and FCA are in continuous dialogue with one another and with relevant stakeholders in order to provide effective solutions to any risks that AI and ML may pose to the financial services sector.
The PRA and BoE work closely with the FCA to continuously assess the opportunities and risks posed by regulated firms’ adoption of AI/ML. To understand the current use of such systems in the UK, the PRA, BoE and FCA published a joint survey, which identified that adoption was indeed increasing and flagged the need for risk management protocols. The regulators launched the AI Public-Private Forum in 2020, bringing together experts from financial services, technology, academia and the public sector to assess the primary drivers of AI risk. The regulators have since sought further stakeholder feedback, including through DP5/22, to assess whether there are gaps in regulation or whether further guidance should be issued.
Takeaways
UK regulated firms should continue watching this space. While the regulators’ current view is that the existing regulatory framework is broadly capable of addressing the risks and challenges posed by AI and ML, this is still a new and rapidly developing area of technology and law. Firms should expect further guidance from the regulators, likely placing even more onus on firms themselves to ensure that their regulatory obligations are met when using AI and ML to provide regulated services.
Cooley trainee solicitor Pia Pyrtek also contributed to this alert.