When everybody's using ChatGPT, how do you know your data is secure?
College students are known for having plenty of questions, and at Babson College, Chief Information Officer Patty Patria understands that students now turn to chatbots for quick answers. As AI tools like ChatGPT grow increasingly popular on the school's network, Babson's IT team relies on Microsoft Defender for Cloud Apps to monitor AI-related activity, such as detecting suspicious prompts and data leaks. Organizations can also consider other Cloud Access Security Broker (CASB) products to manage and secure data usage.
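At a high level, this kind of monitoring works by screening outbound prompts against patterns for sensitive data before they reach a public chatbot. The sketch below is a minimal illustration of that idea only; the pattern names and regexes are hypothetical examples, not the actual rules used by Defender for Cloud Apps or any CASB product.

```python
import re

# Illustrative patterns an IT team might flag before a prompt leaves the
# network. These names and regexes are hypothetical, for demonstration only.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\b(?:confidential|internal use only)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A prompt containing an SSN and a confidentiality marker would be flagged:
print(screen_prompt("Summarize this CONFIDENTIAL memo: SSN 123-45-6789"))
```

A real CASB applies far richer detection (traffic inspection, ML-based classifiers, policy actions like blocking or alerting), but the screen-then-decide flow is the same basic shape.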
Ensuring Data Security
Patria expressed concerns about data security, particularly with tools like ChatGPT. With ChatGPT's rising popularity among US students and workers, the worry is that data sent to the public version can be used to train the underlying models. To address this, Babson banned staff use of DeepSeek and recommended closed models like Copilot for handling institutional data.

According to Bloomberg, Samsung Electronics prohibited employee use of GenAI tools like ChatGPT after sensitive source code was uploaded to the platform. Several financial institutions have also restricted chatbot usage, underscoring the importance of data security in AI applications.
Risk Management Frameworks
Various frameworks, such as the NIST AI Risk Management Framework, MITRE's ATLAS knowledge base, OWASP's list of GenAI risks, and the ISO 42001 standard for AI management, help organizations navigate AI risks. Scott Laliberte, Managing Director at Protiviti, helps clients align their policies with these standards to safeguard intellectual property and prevent data compromise.

Patria emphasized the need for training faculty, staff, and students on the importance of data security when using AI tools. Users are advised to use closed, paid tools when dealing with confidential or proprietary data to ensure that only authorized individuals have access to the information.
Stay informed about the latest trends in cybersecurity, big data, and cloud computing with IT Brew's newsletter, events, and guides.
Industry news by Morning Brew Inc. © 2025 Morning Brew Inc. All Rights Reserved.