Since OpenAI released ChatGPT in November 2022, many individuals and organizations have signed up to use the powerful AI-powered chatbot to handle their questions and requests for information. While ChatGPT can deliver efficiency gains and cost savings for businesses, organizations must address and manage several issues that arise with its use.
Overview of ChatGPT
OpenAI's ChatGPT builds on the same foundations as earlier AI systems: the GPT large language models underneath it are complex artificial neural networks. What sets ChatGPT apart is the vast size of GPT-3 and GPT-4's training corpora and the "transformer" architecture, whose attention mechanism makes these models particularly adept at analyzing and generating language. The sheer efficacy of GPT in analyzing, generating, and predicting language has driven its rapid adoption by organizations worldwide.
Issues to Watch Out for When Using ChatGPT
- Confidentiality of Input: Employees using ChatGPT can enter prompts of up to roughly 25,000 words. OpenAI's terms of use give OpenAI the right to use that input content to develop and improve its services. While there is an option to opt out of such use, it is not clear whether the input data is still retained. This creates a risk of disclosing your business's confidential information and breaching contractual duties of confidentiality owed to third parties.
- Incorrect or Misleading Outputs: ChatGPT generates content based on the data it was last trained on and the statistical patterns its model has learned; it can produce fluent text that is factually wrong. Using ChatGPT's output without a framework for benchmarking the quality of the input and the accuracy of the output is a leap of faith. Output should not be used unless it has been reviewed by someone with domain expertise in the subject matter who understands how the model works.
- Biased and/or Offensive Outputs: ChatGPT is trained on real-world data, which reflects the biases, inequalities, and offensive content present in it. OpenAI has set rules intended to filter out such content, but the subjectivity of those determinations means they will never satisfy everyone.
Organizations that use ChatGPT must have a comprehensive corporate policy that makes employees aware of the uncertainty over how input prompts may be handled. Such a policy should ban the inclusion of personal information and any client or other confidential information in prompts. Additionally, outputs from ChatGPT must be verified by a subject matter expert to ensure they are accurate, and any use of the output should take place within a benchmarking framework. Lastly, organizations must be mindful of the potential for biased and/or offensive outputs and the need to address them promptly.
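As one illustration of how an input-confidentiality policy might be enforced in practice, the sketch below redacts a few common categories of sensitive text from a prompt before it leaves the organization. The pattern list here is purely hypothetical; a real policy would also cover client names, project codenames, and other business-specific identifiers.

```python
import re

# Hypothetical patterns for a pre-submission prompt filter. A real
# deployment would maintain a much broader, organization-specific list.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders so the
    original values are never sent to an external service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com or 555-123-4567."))
```

A filter like this can sit between employees and the chatbot (for example, in an internal proxy), giving the policy a technical backstop rather than relying on manual compliance alone.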