When AI-produced code goes bad
The same generative AI tools that are supercharging the work of both skilled and novice coders can also produce flawed, potentially dangerous code.
Why it matters:
Multiple studies have shown that more than half of programmers are using generative AI to write or edit the software that runs our world — and that number keeps rising.
Catch up quick:
AI coding assistants can do everything from handling developers' drudge work to producing entire codebases from brief prompts.
The big picture:
There haven't yet been any public disasters traced to unchecked AI-generated code, but Sloyan said it's only a matter of time.
The other side:
"I think we're some way off from some sort of AI apocalypse," Paterson said. "These tools ultimately are still just tools, and we've got a pretty good understanding of their limitations."
Editor's note: This story has been corrected to reflect that only ChatGPT passed all the ZDNet coding tests, while Google Gemini Advanced — like Meta AI and Meta Code Llama — failed most of them, and only Microsoft Copilot failed them all.