The fast-evolving technological landscape, particularly in the field of artificial intelligence (AI), continues to provide huge opportunities for growth in the financial services sector. As ever, the important question remains how to strike the right balance between fostering innovation and safeguarding against the risks it poses. On 21 May 2024, the Bank of England (BoE) published a speech by Randall Kroszner, an external member of both the BoE's Financial Policy Committee (FPC) and its Financial Market Infrastructure Committee (FMIC).
The key points to note are:
- Change is rapidly occurring in the financial services sector with the widespread adoption of financial technology (FinTech), specifically AI. The right level of support needs to be provided to FinTech businesses, and the services they offer, to ensure they are able to evolve at pace with the world around them, while keeping a close eye on financial stability risks.
- According to the 2022 Machine Learning (ML) Survey conducted by the BoE and the Financial Conduct Authority (FCA), 72% of financial services respondents reported using or developing ML applications. Firms are predominantly developing or using ML for customer engagement (28%), risk management (23%), and support functions like human resources and legal departments (18%). Industry engagement suggests that firms, particularly large traditional financial institutions, are typically using ML to improve their overall efficiency and productivity.
- It is generally accepted that AI has the potential to significantly boost productivity growth, which the UK needs to embrace, and the FPC and the FMIC both have a key role to play in meeting the BoE's financial stability objectives as this unfolds. The exact impact that AI and financial technology will have on the economy comes down to a question of speed and scale, and this brings uncertainty with it.
- Creating an environment that provides for both financial stability and innovation is, of course, challenging when dealing with the potential for fundamentally disruptive innovation that AI could bring, versus the more traditional case where innovation and change are incremental. Whilst recognising that much of the terrain here is new, a thoughtful approach is naturally required, but the fact that the challenges may seem daunting, and perhaps even difficult to contemplate, is not an excuse for inaction.
- Given the lack of data, it is very challenging for regulators to understand what action they should take in the face of disruptive innovation in particular, and how to balance such action against their financial stability goals.
- However, regulators need to be receptive to new approaches. Safe environments can be created to foster innovation in a "technology friendly" space. This is the intention of the Digital Securities Sandbox (DSS), a regime currently under consultation with the FCA, which will allow firms to use developing technology, such as distributed ledger technology, in the issuance, trading and settlement of securities such as shares and bonds. However, fundamentally disruptive innovations, such as ChatGPT and other AI tools, often involve the potential for extraordinarily rapid scaling that will test the limits of such regulatory tools. A sandbox approach may not be appropriate in these cases.
- Misalignment is another challenge. This relates to the concern that, once AI systems can act and plan in accordance with specific goals, they may, no matter how benign they are initially, become misaligned with humanity's needs and values in the pursuit of their key objective. Policymakers and regulators will need to grapple with this issue, and a 'constitutional' approach may be the one to take.
- Operational resilience is also key, and the ability to learn from operational disruptions, such as cyber-attacks and internal process failures, and to respond effectively, may be crucial.
- It is all still at a relatively early stage, but remaining alert and establishing a constructive dialogue amongst key players is likely to be vital. In a letter published on 22 April 2024, the BoE and the PRA informed the government of the work they have undertaken to date relating to AI and ML and how this work fits within their statutory objectives and remit. The letter noted that the regulators are now considering establishing an industry consortium as a follow-up to the AI Public-Private Forum (AIPPF), which the BoE and FCA ran between 2020 and 2022 to examine the challenges of using AI and ML within financial services.
As AI and other financial technologies continue to evolve at a rapid pace, this will certainly be an interesting area to watch for insurers already involved in this space.