Security and Privacy Issues with GPT-4
OpenAI’s recently launched GPT-4 is a major improvement over its predecessors GPT-3 and ChatGPT. However, GPT-4 still has several security and privacy issues carried over from ChatGPT, and some of them have become even more pronounced.
Security Issues with GPT-4
Many security experts have reported that ChatGPT’s security issues persist in GPT-4, and that its social engineering capabilities have only improved, enabling it to generate more natural-sounding phishing emails and conversations.
For instance, researchers were able to trick ChatGPT into creating ransomware and obfuscating malware by bypassing its guardrails with simple deceptive wording. ChatGPT was also jailbroken out of its security controls through the use of an alter ego. While that particular jailbreak does not work on GPT-4, the model can still be jailbroken by coaxing it into a “developer mode.”
Moreover, OpenAI suffered a data breach earlier this year that affected approximately 1.2% of ChatGPT Plus subscribers. The breach exposed sensitive information such as user names, email addresses, and payment addresses, along with the last four digits of credit card numbers and their expiration dates. The breach was caused by a bug in the open-source Redis library, which OpenAI quickly fixed.
Another problem is that employees’ use of generative AI systems like GPT-4 can create privacy and data residency compliance issues. This was highlighted recently when Samsung employees allegedly entered proprietary data into ChatGPT, which could have caused serious problems.
Peril of OpenAI’s Plugin System
OpenAI’s plugin system, which allows third parties to integrate GPT models into other platforms, can also pose significant risks. Malicious developers could build plugins that undermine a platform’s security posture or degrade the system’s ability to respond safely to user questions.
Possible Solutions to Mitigate Risks
Several companies and even some countries have restricted or banned the use of ChatGPT and other generative AI tools due to security concerns. However, generative AI has significant benefits, including processing vast amounts of data, improving interactions with customers, and even writing code.
Therefore, there is a need to strike a balance and adopt approaches that mitigate the potential risks. Companies that use generative AI systems should treat OpenAI as a third-party vendor that must be thoroughly vetted, and should draft contracts that define the relationship and establish security service-level agreements between the enterprise and OpenAI.
Data classification standards should also specify which types of data must never be shared with third parties, to keep the AI model from leaking sensitive information or disclosing company secrets. When using the GPT-4 API, client software should be programmed securely, and application developers should ensure that no secrets are stored or logged locally.
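As an illustration of that last point, the sketch below shows one way a client application might keep the API key out of source code and scrub key-like strings from its logs. The helper names (`load_api_key`, `RedactSecrets`) and the `sk-` key format are assumptions for this example, not part of any official OpenAI client.

```python
import logging
import os
import re


def load_api_key() -> str:
    """Read the API key from the environment rather than hard-coding it.

    Keeping the key out of source files means it never lands in
    version control or local config by accident.
    """
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key


class RedactSecrets(logging.Filter):
    """Logging filter that masks anything resembling an API key (sk-...)."""

    PATTERN = re.compile(r"sk-[A-Za-z0-9]+")

    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place so the raw key never reaches a log file.
        record.msg = self.PATTERN.sub("sk-***REDACTED***", str(record.msg))
        return True


logger = logging.getLogger("gpt_client")
logger.addFilter(RedactSecrets())
```

Attaching the filter to the logger means even an accidental `logger.info(f"using {key}")` would be scrubbed before it is written out.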
The OWASP API Security Top 10 can also guide the management of generative AI integrations by addressing vulnerabilities such as injection and cryptographic failures. Companies utilizing the GPT-4 API should verify any model-generated code before using it in production.
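One small, automatable piece of that verification step can be sketched as follows: a static scan that flags obviously dangerous calls in model-generated Python before a human reviews it. The function name and the list of flagged builtins are illustrative assumptions; a check like this supplements, but never replaces, manual code review.

```python
import ast

# Illustrative deny-list of builtins that warrant extra scrutiny in
# generated code; a real review process would go far beyond this.
DANGEROUS_CALLS = {"eval", "exec", "compile", "__import__"}


def flag_dangerous_calls(source: str) -> list[str]:
    """Return the names of risky builtin calls found in generated source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Only direct calls by name are caught here (e.g. eval(...)),
        # which is enough to surface the most obvious red flags.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append(node.func.id)
    return findings
```

In a CI pipeline, a non-empty result could block the generated snippet from merging until a reviewer signs off.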