Samsung Bans ChatGPT Among Employees After Sensitive Code Leak
Samsung Electronics has banned the use of ChatGPT and other AI-powered chatbots by its employees over concerns that sensitive internal information could leak through such platforms. The ban was prompted by an engineer who accidentally uploaded sensitive internal source code to ChatGPT last month. Samsung subsequently issued a memo last week banning the use of “generative AI” tools.
There are concerns that data shared with AI chatbots gets stored on servers owned by the companies operating the services, such as OpenAI, Microsoft, and Google, with no easy way to access or delete it, and could end up being served to other users. Samsung is not the only tech giant to crack down on the use of ChatGPT and similar tools among employees; Amazon has issued a similar warning to its staff. In February, JPMorgan Chase heavily restricted the use of ChatGPT by its staffers amid concerns that it may face regulatory risks over the sharing of sensitive financial information. Soon after, other major U.S. banks, including Bank of America, Citigroup, Deutsche Bank, Wells Fargo, and Goldman Sachs, followed suit.
Despite lingering concerns, several workplaces have begun integrating generative AI tools into their workflows. Management consulting firm Bain & Company announced earlier this year that it was integrating OpenAI’s generative tools into its management systems. On Monday, IBM CEO Arvind Krishna said the company would stop hiring for jobs that AI tools can do.