10 Essential Considerations for Developing ChatGPT Policies

Published May 13, 2023

Creating effective ChatGPT policies for your organization requires careful thought. HR teams need to ask and answer many questions before setting policies to guide employees' use of OpenAI's ChatGPT and other generative artificial intelligence tools, legal experts say.

As the content-producing technology grows more popular, companies across industries are excited about its prospects while also curious, if not fearful, about where it may lead. Before adopting ChatGPT, organizations should establish policies that address their risk tolerance, set clear expectations, and require that output be verified for accuracy.

Setting expectations and verifying accuracy

Jenn Betts, shareholder and co-chair of the technology practice group at Ogletree Deakins in Pittsburgh, said companies should determine their level of risk tolerance and write policies that clearly spell out expectations. More conservatively postured companies are unlikely to be comfortable allowing widespread use of generative artificial intelligence, given open questions about the technology's accuracy and security, two areas where many organizations have serious concerns.

Even organizations willing to embrace generative AI have concerns about preserving and protecting their own data, as well as the data of their clients or customers. Employers developing generative artificial intelligence policies typically forbid employees from uploading confidential information or otherwise using it in their ChatGPT work.

An outright ban may not be the right move, however, as it can drive "shadow" ChatGPT usage by employees. A more sensible approach is to monitor usage and encourage innovation while ensuring the technology is used only to augment internal work, with properly vetted data, rather than in an unfiltered way with customers and partners.

Fact-checking ChatGPT and employees' knowledge levels

Janice Agresti, associate attorney at Cozen O'Connor in New York City, said that when deciding what course a policy should take, organizations should consider the type of work they do; how familiar their workforce is with developing technologies and those technologies' limitations; how much of the work is output-driven; and the workforce's areas of expertise. ChatGPT still makes plenty of mistakes, and users need to be able to recognize them.

If the user is an expert, that individual can fact-check the output and modify it according to their expertise. In that case, the individual is more likely to treat ChatGPT as a starting point rather than a final output, which poses fewer reputational, legal, and other risks for a company. Likewise, a user who understands ChatGPT's limitations will take steps to check the output for accuracy, legality, and the like, again reducing those risks.

Conclusion

Employers should set guidelines when deciding how much to allow or limit the use of ChatGPT and similar tools. It makes sense to strategize about the tasks for which AI might be used, and about the strengths and limitations of AI tools for those purposes. It is also good practice to designate point people to oversee AI usage and troubleshoot problems as they arise. As the technology continues to advance, employers should be prepared to stay flexible and adjust their policies accordingly.