Banks' use of ChatGPT-like AI comes under CFPB's watch
The Consumer Financial Protection Bureau (CFPB) is examining how generative AI tools, such as ChatGPT, could create risks in customer care if banks introduce bias or misinformation through them. The agency is keeping a close eye on the banking industry, which is weighing the potential benefits of deploying generative AI in customer interactions. Banks are exploring the technology mainly for customer service, using chatbots and virtual agents to offer more personalized experiences and better support.
Peter-Jan van de Venn, Vice President of Global Digital Banking at digital consultancy Mobiquity, said the banking industry needs to exercise caution and deploy generative AI only in non-sensitive cases that do not reference client information or a bank's own data. Meanwhile, the CFPB is working to encourage tech whistleblowers to alert the agency when the technology they build may be violating the law. Regulators have yet to lay out detailed guidelines on how generative AI can be used in financial services.
Some banks are already moving ahead. Bank of America is working to expand the capabilities of Erica, its AI-powered digital assistant, to include personalized banking, investing, credit, and retirement-planning advice. However, when asked about Bank of America's vision for the tool in light of ChatGPT's growing popularity, CEO Brian Moynihan said the lender would maintain a cautious approach.
The growth of machine learning and generative AI, and the attention surrounding ChatGPT, have raised questions about security and bias across multiple sectors. The CFPB aims to crack down on "unchecked AI" in lending, housing, and employment through an interagency initiative alongside the Justice Department, the Federal Trade Commission, and the Equal Employment Opportunity Commission. The agencies have committed to enforcing their respective laws and regulations to rein in unlawful discriminatory practices by companies that deploy AI technologies.