The Legal Risks of AI Disruption in Investment Research

Published on May 13, 2023

Wall Street research & ChatGPT: Firms face legal risks over AI disruption

Artificial intelligence (AI) disruption is arriving in investment research, particularly in the production of daily analyst reports. The biggest investment banks and other financial institutions may be weighing AI applications for research content, but first they need to consider the unclear and thorny legal risks of using the technology.

The potential impact of AI on US investment banks and brokerage firms is vast. A recent Goldman Sachs report estimated that 35% of employment in business and financial operations is exposed to generative AI, technology that can produce novel, human-like output. One prominent example is ChatGPT, a generative AI product from the research laboratory OpenAI.

The Goldman Sachs report did not specifically examine how AI would affect investment research. However, Joseph Briggs, one of the report's authors, said that "equity research is a bit more highly exposed, at least on an employment-weighted basis."

The open question is how far AI will go in replacing human input and analysis, whether in less nuanced Wall Street tasks such as company earnings projections or in more fundamental industry research. According to new academic research, ChatGPT can perform certain Wall Street tasks as well as experienced analysts.

A study by the Federal Reserve Bank of Richmond used Generative Pre-trained Transformer (GPT) models to analyze the technical language the Federal Reserve uses in its monetary policy decisions. Experts whose job it is to predict future monetary policy decisions apply a mix of technical and interpretive skills to read through the often opaque and obscure language Fed officials use in their communications with the public.

The analysis found that GPT models "demonstrate a strong performance in classifying Fedspeak sentences, especially when fine-tuned." Despite that impressive performance, however, GPT-3 is not infallible: it may still misclassify sentences or miss nuances that a human evaluator with domain expertise would catch.
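To make the idea concrete, here is a minimal sketch of zero-shot Fedspeak classification using the publicly available OpenAI API. The prompt wording, the simplified three-label scheme, and the model name are all illustrative assumptions for this sketch, not the Richmond Fed study's actual methodology (the paper also evaluated fine-tuned models and a more granular label set).

```python
# Minimal sketch: zero-shot Fedspeak classification via the OpenAI API.
# Assumptions: prompt text, label set, and model choice are illustrative,
# not the Richmond Fed study's actual setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ["hawkish", "dovish", "neutral"]  # simplified label set

def classify_fedspeak(sentence: str) -> str:
    """Ask a GPT model to assign a single policy-stance label to a sentence."""
    response = client.chat.completions.create(
        model="gpt-4",   # any chat-capable model would work here
        temperature=0,   # deterministic output for a classification task
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify sentences from Federal Reserve communications. "
                    f"Reply with exactly one word from: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_fedspeak(
    "The Committee anticipates that ongoing increases in the target range "
    "will be appropriate to return inflation to 2 percent over time."
))  # a hawkish-leaning sentence, so "hawkish" is the expected label
```

Even a toy setup like this illustrates the study's caveat: the model returns a confident-looking label every time, so a human reviewer is still needed to catch sentences where the policy signal is ambiguous.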

Given the vacuum of specific regulatory rules for Wall Street research, lawyers are warning firms to be careful about how they use, and disclose their use of, AI and ChatGPT in their research products. Mary Jane Wilson-Bilik, a partner at the law firm Eversheds Sutherland in Washington, D.C., cautioned that it would be best for firms to tell clients when AI was used in writing a report or analysis. She added that some firms are adding language about the possible use of AI to their online privacy policies.

If firms use AI in a misleading or deceptive way, they could run afoul of anti-fraud statutes and face regulatory action or litigation. AI tools therefore need to be checked for accuracy and bias and surrounded by robust guardrails.