ChatGPT-Generated Code: A Risk to Your Security
OpenAI's ChatGPT has brought a new dimension to artificial intelligence: it can generate working code. However, recent research by computer scientists from the Université du Québec in Canada has exposed a weakness: the code ChatGPT generates is often insecure, leaving users open to security vulnerabilities.
In the study, the researchers asked ChatGPT to generate 21 programs across five programming languages, each designed to probe for a specific class of security vulnerability, such as memory corruption, denial of service, and flawed cryptography implementations. On its first attempt, ChatGPT produced only five secure programs out of 21.
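To make the memory-corruption category concrete, here is a minimal hypothetical C sketch of the kind of flaw the study probed for. The scenario and function names are illustrative assumptions, not code taken from the paper:

```c
#include <stdio.h>
#include <string.h>

/* Insecure: copies untrusted input into a fixed-size buffer with no
 * bounds check, so input longer than 15 characters corrupts the stack.
 * (Hypothetical example; not drawn from the study itself.) */
void store_username(const char *input) {
    char buffer[16];
    strcpy(buffer, input);
    printf("stored: %s\n", buffer);
}

/* Hardened: truncates to the buffer size and guarantees termination. */
void store_username_safe(const char *input) {
    char buffer[16];
    strncpy(buffer, input, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';   /* strncpy may omit the terminator */
    printf("stored: %s\n", buffer);
}

int main(void) {
    store_username_safe("alice");
    store_username_safe("a-name-far-longer-than-the-buffer-allows");
    return 0;
}
```

A single missing bounds check like the one in `store_username` is enough to turn an otherwise correct program into an exploitable one, which is why the researchers counted such programs as insecure even when they worked on well-behaved input.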
Even when the researchers prompted ChatGPT to produce more secure programs, the model did not spontaneously recognize that the code it had generated was insecure. It offered useful guidance only after the researchers explicitly asked it to remediate the problems.
The researchers identified several recurring shortcomings. First, ChatGPT did not assume an adversarial model of code execution, which is essential for identifying vulnerabilities. Second, it flagged critical vulnerabilities only when explicitly asked to evaluate the security of its own suggestions. Finally, the model often recommended simply supplying valid inputs, an assumption that does not hold in the real world, where attackers deliberately craft malicious input; the sketch below illustrates the difference.
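The following hedged C sketch contrasts the two mindsets. The "trusting" parser embodies the valid-inputs-only assumption the researchers criticized, while the defensive version treats every input as potentially hostile. The function names and the port-parsing scenario are assumptions made for illustration, not examples from the study:

```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Valid-inputs-only mindset: atoi gives no error reporting, so malformed
 * or out-of-range input silently yields a bogus value. */
int parse_port_trusting(const char *s) {
    return atoi(s);
}

/* Adversarial mindset: validate everything and reject anything suspect. */
int parse_port_defensive(const char *s, int *out) {
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0' || v < 1 || v > 65535)
        return -1;           /* malformed, trailing junk, or out of range */
    *out = (int)v;
    return 0;
}

int main(void) {
    int port;
    printf("trusting parse of \"99999x\": %d\n", parse_port_trusting("99999x"));
    if (parse_port_defensive("8080", &port) == 0)
        printf("defensive parse accepted: %d\n", port);
    if (parse_port_defensive("99999x", &port) != 0)
        printf("defensive parse rejected hostile input\n");
    return 0;
}
```

The defensive version is longer and less convenient, which is precisely why a model optimizing for plausible-looking answers may default to the trusting one unless the prompt demands otherwise.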
The study's authors suggested that part of the problem is that extracting secure code from ChatGPT requires knowing which questions to ask, which presupposes familiarity with specific vulnerabilities and coding techniques. They also noted an ethical inconsistency: ChatGPT refuses to generate attack code, yet readily generates vulnerable code.
In short, code generated by ChatGPT is often insecure and can expose users to real security threats. Any code produced by the chatbot should be reviewed for bugs and vulnerabilities before it is used.