Wall Street research & ChatGPT: Firms face legal risks over AI-generated reports
Artificial intelligence (AI) is disrupting many sectors, and investment research is no exception. Investment banks and other financial firms generate a large volume of reports and analysis daily, and using AI to produce them could upend the status quo. Before these tools see widespread adoption, however, the legal implications of using ChatGPT and other AI-powered applications need to be addressed.
A recent report published by Goldman Sachs estimates that 35% of employment in the business and finance industry is exposed to generative artificial intelligence, which produces novel, human-like output. That output can extend to equity research. New academic research suggests that AI-powered applications can already perform specific tasks that experienced analysts carry out, including those requiring technical and interpretive skills, making AI-generated reports and analysis a feasible option.
However, regulators have yet to issue rules specific to the use of AI in Wall Street research. AI regulation in the United States remains in its early stages: agencies have issued general guidance, but laws targeting AI-powered applications are few. This gap has created an uncertain legal environment for firms using such applications, including ChatGPT. In late April, four U.S. federal agencies issued a joint statement warning of the escalating threat from fast-growing AI applications, including ChatGPT and other rapidly evolving automated systems.
The Securities and Exchange Commission (SEC) has indicated that it will issue a proposal this year on firms' use of predictive data analytics. It is unclear, however, whether the proposal will require firms to disclose whether AI tools such as ChatGPT were used in providing advice or reports to customers. Currently, clients have no legal right to know whether AI was used to generate a research report, but risks arise if a client is misled or deceived about how an AI-powered application was used. Mary Jane Wilson-Bilik, a partner at the law firm Eversheds Sutherland, advises firms to be cautious in how they use AI in their research products and in how they disclose that use; doing so helps them avoid misleading clients and running afoul of anti-fraud statutes.
Firms are also advised to vet AI tools for accuracy and bias; without robust guardrails, they could face regulatory action or litigation.