Is ChatGPT responsible for insecure code?

Published On Sat May 13 2023

ChatGPT writes insecure code, research finds

A recent study by computer scientists at the Université du Québec in Canada has found that ChatGPT, OpenAI's popular chatbot, generates insecure code. The paper, titled "How secure is code generated by ChatGPT?" and written by Raphaël Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara, concludes that although ChatGPT is aware of common vulnerabilities, the code it generates falls short of even minimal security standards in most contexts. When asked whether its output was secure, the bot acknowledged that it was not; when explicitly asked, however, it was able to produce a more secure version of the code.

The researchers assumed the role of a novice programmer with no security in mind and asked ChatGPT to generate code, without specifically requesting secure code or particular security features. In response, ChatGPT produced 21 applications in five programming languages: C, C++, HTML, Java, and Python, with the longest program running 97 lines of code.

The study found that on the first run, only five of the 21 generated applications were secure. When asked to make changes, the chatbot was able to produce seven more secure applications from the remaining 16, but it generated "secure" code only when the user explicitly requested it. For instance, when asked to create a simple FTP server for file sharing, ChatGPT generated code without any input sanitization; it added that safeguard only after the researchers prompted it to do so.
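The paper does not reproduce the generated FTP server, but the kind of input sanitization at issue can be illustrated with a short sketch. The function below (all names are hypothetical, not taken from the study) canonicalizes a client-supplied path and rejects directory-traversal attempts such as `../../etc/passwd` — the check an FTP-style file server needs before touching the filesystem:

```python
import os

def resolve_request(base_dir: str, requested: str) -> str:
    """Resolve a client-supplied path inside base_dir, rejecting
    directory-traversal attempts (e.g. "../../etc/passwd")."""
    base = os.path.realpath(base_dir)
    # Join then canonicalize: realpath collapses ".." segments and symlinks.
    target = os.path.realpath(os.path.join(base, requested))
    # The resolved path must remain inside the shared directory.
    if target != base and os.path.commonpath([base, target]) != base:
        raise PermissionError(f"path escapes shared directory: {requested}")
    return target
```

With a share rooted at `/srv/ftp`, `resolve_request("/srv/ftp", "docs/readme.txt")` resolves normally, while `resolve_request("/srv/ftp", "../../etc/passwd")` raises `PermissionError`. Code generated without such a check will happily serve any file the process can read, which is the class of vulnerability the researchers flagged.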

The researchers concluded that ChatGPT does not assume an adversarial model of execution, and so does not produce secure code by default. Although the chatbot readily admits to errors in its code, they noted, this explanatory benefit reaches only security-conscious programmers who think to ask it about security issues.

Because students and programmers may already be using the tool in the wild, the researchers found the discovery worrying. "Having a tool that generates insecure code is really dangerous," one of the Université du Québec researchers said in an interview with The Register. "We need to make students aware that if code is generated with this type of tool, it very well might be insecure."