ChatGPT could expose corporate secrets, cyber firm warns
A report from Team8, an Israel-based venture firm, warns that companies using generative artificial intelligence tools like ChatGPT could be putting confidential customer information and trade secrets at risk. Widespread adoption of new AI chatbots and writing tools could leave companies exposed to data leaks and lawsuits, the report said. One fear is that hackers could exploit the chatbots to access sensitive corporate information or take actions against a company. Another is that confidential information fed into the chatbots today could be used by AI companies in the future.
Major technology companies, including Microsoft and Alphabet, are racing to add generative AI capabilities to their chatbots and search engines, training models on data scraped from the Internet to offer users a one-stop shop for answers to their queries. The report cautions that if these tools are fed confidential or private data, erasing that information will be very difficult, so companies that use them may be putting sensitive data at risk.
It is therefore important that companies take precautions to protect the data they share with these chatbots, both to prevent security breaches that could harm customers and to avoid lawsuits that could damage their reputation.
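One basic precaution, not drawn from the report itself but offered here as an illustrative sketch, is to redact obviously sensitive strings before a prompt ever leaves the company's systems. The patterns and the redact function below are hypothetical examples chosen for clarity; a production deployment would rely on a proper data-loss-prevention (DLP) tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only (hypothetical, not exhaustive): a real
# deployment would use a dedicated DLP scanner, not ad hoc regexes.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: contact jane.doe@example.com, key sk-abcdef1234567890XYZ"
    print(redact(prompt))
    # -> Summarize: contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

Even a filter this simple illustrates the principle the report points to: data that never reaches the chatbot cannot be retained, trained on, or leaked by it.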