How to protect your Intellectual Property in AI Chatbots

Published on Fri, May 12, 2023

Cybersecurity Leaders Reflect on Samsung, ChatGPT Incidents

Recently, there have been reports that some Samsung employees entered corporate information into OpenAI's popular AI chatbot, ChatGPT. The resulting leak prompted Samsung to warn its executives and employees to exercise caution. Experts in the cybersecurity sector have weighed in on the security implications of the incident.

Using tools that leverage the GPT engine

Cybersecurity experts have warned that companies should avoid using tools that leverage the GPT engine in ways that mirror ChatGPT, because once data is sent to OpenAI's external servers, there is no way for Samsung, or any other customer, to retrieve or delete it. Companies must instead use their own instances, where the data is not shared outside the organization.
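As a minimal sketch of what that looks like in practice, assuming an organization runs a privately hosted, OpenAI-compatible deployment (the base URL, API key, and model name below are hypothetical placeholders, not anything OpenAI or Samsung prescribes), all requests can be routed to the internal endpoint instead of the public API:

```python
from openai import OpenAI

# Point the client at a privately hosted deployment rather than the
# public api.openai.com endpoint, so prompts never leave the
# organization's boundary. URL, key, and model name are placeholders.
client = OpenAI(
    base_url="https://gpt.internal.example.com/v1",  # hypothetical private endpoint
    api_key="INTERNAL_API_KEY",                      # issued by your own gateway, not OpenAI
)

response = client.chat.completions.create(
    model="internal-gpt",  # hypothetical private model deployment
    messages=[{"role": "user", "content": "Summarize this internal design document."}],
)
print(response.choices[0].message.content)
```

The design point is the `base_url`: as long as every client resolves to infrastructure the organization controls, the retrieve-or-delete problem described above never arises.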

Human behavior and security

Similar security-related mistakes have been seen with GitHub, Stack Overflow, Reddit, and even search engines. With that in mind, security leaders can apply the idea of “building it like it’s broken” and presume that similar incidents could happen in their own organization. Suggested responses include user awareness training, a fresh look at access controls, and education on the risks and limitations of sharing data with AI/ML tooling. Companies are rapidly rolling out acceptable use policies for AI/ML tooling, so leaders should prioritize this.
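An acceptable use policy is only as strong as its enforcement point. One hedged sketch, not taken from the article: a pre-submission filter that scans each outbound prompt for obvious secret material before it reaches any AI/ML tool. The patterns and domain below are illustrative assumptions; a real deployment would use a proper DLP engine tuned to the organization's data.

```python
import re

# Illustrative patterns only; extend with patterns for your own data.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),        # private key material
    re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),              # API-key-like tokens
    re.compile(r"\b[\w.-]+@internal\.example\.com\b"),          # hypothetical internal domain
]

def check_prompt(prompt: str) -> str:
    """Raise if the prompt matches a sensitive pattern; otherwise pass it through."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Blocked by acceptable-use policy: matched {pattern.pattern!r}")
    return prompt

# Usage: gate every outbound prompt before it leaves the organization.
safe_prompt = check_prompt("How do I format a date in Python?")
```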

Protecting Intellectual Property (IP)

Protecting valuable data is crucial, and cybersecurity experts suggest exploring options that let companies leverage a privately trained model, so their valuable data is only used by their own model. The model's creators can see all data entered and may use it as the model continues to train and grow; once that information becomes part of the next model, the AI could surface it to another party. Suggestions include educating employees on what information is highly sensitive, defining how that data may be shared with humans or computer systems, and investing in non-public models for IP protection.
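As a sketch of that routing decision (the classification labels and model names are assumptions for illustration, not anything from the incident reports), each request can be classified for sensitivity up front, with only public material allowed to reach an externally hosted model:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # already published, safe anywhere
    INTERNAL = 2    # routine business data
    RESTRICTED = 3  # trade secrets, source code, unreleased designs

def choose_model(sensitivity: Sensitivity) -> str:
    """Route prompts by data classification (model names are hypothetical)."""
    if sensitivity is Sensitivity.PUBLIC:
        return "public-hosted-gpt"   # an external service is acceptable here
    # INTERNAL and RESTRICTED data never leave the organization.
    return "private-gpt-instance"

assert choose_model(Sensitivity.RESTRICTED) == "private-gpt-instance"
```

The classification step is where employee education pays off: the gate can only be as accurate as the labels people and systems assign to the data.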

Conclusion

ChatGPT is a useful tool, but users should be careful about what they share with it. Companies should avoid tools that leverage the GPT engine in ways that mirror ChatGPT, where data leaves the organization, and should favor private instances instead. Cybersecurity teams should apply the idea of “building it like it’s broken” and presume that similar incidents could take place in their own company. It is a company's responsibility to protect its data, and having the right policies and training in place is crucial to ensuring that this is done properly.