Consumers and the UK financial system are facing “serious harm” due to the failure of the Government and the Bank of England to get to grips with the risks posed by artificial intelligence, an influential parliamentary committee has warned.
In a new report, MPs on the Treasury committee criticize ministers and City regulators, including the Financial Conduct Authority (FCA), for taking a “wait and see” approach to the use of AI in the financial sector.
This is despite growing concerns that the advancing technology could harm already vulnerable consumers, or even trigger financial distress if AI leads firms to make similar financial decisions in response to economic shocks.
More than 75% of the City’s firms now use AI, with insurers and international banks the biggest adopters. The technology is being used to automate administrative work and even assist with core functions, including processing insurance claims and assessing the creditworthiness of customers.
But the UK has failed to develop any specific laws or rules to govern the use of AI, with the FCA and the Bank of England maintaining that existing, general rules are sufficient to ensure positive outcomes for consumers. That leaves businesses to determine for themselves how those rules apply to AI, a gap that MPs worry could threaten consumers and financial stability.
“It is the responsibility of the Bank of England, the FCA and the government to ensure that the safety net within the system is maintained,” said Meg Hillier, chair of the Treasury committee. “Based on the evidence I have seen, I am not confident that our financial system would be prepared if there were a major AI-related incident, and that is worrying.”
The report highlights a lack of transparency about how AI could influence financial decisions, potentially affecting vulnerable consumers’ access to credit or insurance. It added that it is also unclear whether data providers, technology developers or financial firms will be held responsible if things go wrong.
MPs said AI also increased the potential for fraud and the spread of unregulated and misleading financial advice.
On financial stability, MPs found that growing AI use has increased firms’ cybersecurity risks and made them dependent on a small number of US tech companies, such as Google, for essential services. The technology may also amplify “herd behavior”, leading businesses to make similar financial decisions during economic shocks and raising the “risk of financial distress”.
The Treasury committee is now urging regulators to take action, including new stress tests to assess the City’s preparedness for AI-driven market shocks. MPs also want the FCA to publish “practical guidance” by the end of the year, clarifying how consumer protection rules apply to the use of AI, and who will be held responsible if consumers suffer harm.
“By taking a wait-and-see approach to AI in financial services, the three authorities are risking potentially serious harm to consumers and the financial system,” the report said.
The FCA said it had already “done extensive work to ensure that companies are able to use AI in a safe and responsible way”, but would review the report’s findings “carefully”.
A Treasury spokesperson said: “We are clear that we will strike the right balance between managing the risks posed by AI and unlocking its huge potential.”
The spokesperson said this included working with regulators to “strengthen our approach as the technology evolves” and appointing new “AI champions” covering financial services, to ensure the UK takes advantage of the opportunities presented in a safe and responsible way.
A Bank of England spokesperson said it had “already taken proactive steps to assess AI-related risks and strengthen the resilience of the financial system, including publishing a detailed risk assessment and highlighting the potential impacts of sharp declines in AI-impacted asset prices. We will carefully consider the committee’s recommendations and will provide a full response in due course.”