Researchers Discover Security Flaws In Open-Source AI And ML Models
A recent investigation has uncovered over 30 security flaws in open-source AI and machine learning (ML) models, raising concerns about potential data theft and unauthorized code execution, as reported by The Hacker News (THN).
Key Vulnerabilities
These vulnerabilities were found in widely used tools, including ChuanhuChatGPT, Lunary, and LocalAI, and were reported via Protect AI’s Huntr bug bounty platform, which rewards security researchers for finding and responsibly disclosing flaws.
Lunary Vulnerabilities
Among the most severe vulnerabilities identified, two major flaws impact Lunary, a toolkit designed to manage large language models (LLMs) in production environments.
The first flaw, CVE-2024-7474, is an Insecure Direct Object Reference (IDOR) vulnerability. It allows an authenticated user to view or delete other users’ data simply by referencing their object identifiers, because the server never verifies that the requested record belongs to the caller, opening the door to data breaches and data loss.
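To make the pattern concrete, here is a minimal, hypothetical Flask sketch of an IDOR-prone delete endpoint alongside its fix. The routes, the toy `USERS` store, and the hard-coded caller are illustrative assumptions, not Lunary’s actual code.

```python
# Hypothetical sketch of the IDOR pattern behind CVE-2024-7474
# (not Lunary's actual code). USERS and CURRENT_USER_ID stand in
# for a real database and authenticated session.
from flask import Flask, abort

app = Flask(__name__)
USERS = {1: "alice", 2: "bob"}   # toy datastore
CURRENT_USER_ID = 1              # stand-in for the logged-in user

# Vulnerable: deletes whatever ID the client supplies, with no check
# that the record belongs to the caller.
@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user_vulnerable(user_id):
    USERS.pop(user_id, None)
    return "", 204

# Fixed: reject requests that reference someone else's record.
@app.route("/v2/users/<int:user_id>", methods=["DELETE"])
def delete_user_fixed(user_id):
    if user_id != CURRENT_USER_ID:
        abort(403)               # authorization enforced server-side
    USERS.pop(user_id, None)
    return "", 204
```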
The second critical issue, CVE-2024-7475, is an improper access control vulnerability that lets an attacker update the system’s SAML configuration. By rewriting that configuration, for instance to point authentication at an identity provider under their control, attackers can bypass login security and gain unauthorized access to personal data, a significant risk for any organization relying on Lunary to manage LLMs.
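The fix for this class of bug is a server-side privilege check on the configuration endpoint. The sketch below is hypothetical rather than Lunary’s real code; the route, payload shape, and role model are assumptions.

```python
# Hypothetical sketch of the access control missing in CVE-2024-7475
# (not Lunary's actual code). Without the role check, any user could
# repoint SAML authentication at an identity provider they control.
from flask import Flask, abort, request

app = Flask(__name__)
SAML_CONFIG = {"idp_url": "https://idp.example.com/sso"}
CURRENT_USER_ROLE = "member"     # stand-in for the authenticated session

@app.route("/org/saml", methods=["POST"])
def update_saml_config():
    if CURRENT_USER_ROLE != "admin":   # the check the flaw bypasses
        abort(403)
    SAML_CONFIG.update(request.get_json())
    return SAML_CONFIG, 200
```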
A third Lunary weakness, CVE-2024-7473, is another IDOR vulnerability: by manipulating a user-controlled parameter, an attacker can update prompts submitted by other users.
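From the attacker’s side, exploiting this kind of IDOR can be as simple as rewriting an identifier in a request. The endpoint, parameter names, and token below are assumptions for illustration, not Lunary’s real API.

```python
# Illustrative exploitation of a CVE-2024-7473-style IDOR; the URL and
# field names are hypothetical. The attacker substitutes a victim's
# prompt ID for their own.
import requests

session = requests.Session()
session.headers["Authorization"] = "Bearer <attacker-token>"

# The attacker owns prompt 17 but targets prompt 42, owned by a victim.
resp = session.patch(
    "https://lunary.example.com/api/prompts/42",  # assumed endpoint
    json={"content": "attacker-controlled prompt text"},
)
print(resp.status_code)  # 200 on a vulnerable server: no ownership check
```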
ChuanhuChatGPT Vulnerability
In ChuanhuChatGPT, a critical path traversal vulnerability (CVE-2024-5982) in the user upload feature can lead to arbitrary code execution, arbitrary directory creation, and exposure of sensitive data, a high risk for any system relying on this tool.
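The underlying bug pattern looks roughly like this. The sketch assumes a hypothetical upload handler and directory layout, not ChuanhuChatGPT’s actual code: a filename such as ../../app/config.py escapes the upload directory unless the resolved path is validated.

```python
# Minimal sketch of the path traversal pattern behind CVE-2024-5982
# (not ChuanhuChatGPT's actual code).
import os

UPLOAD_DIR = "/srv/app/uploads"

def save_upload_vulnerable(filename: str, data: bytes) -> None:
    # "../" sequences in filename walk out of UPLOAD_DIR.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as f:
        f.write(data)

def save_upload_fixed(filename: str, data: bytes) -> None:
    # Resolve the final path and confirm it stays inside UPLOAD_DIR.
    base = os.path.realpath(UPLOAD_DIR)
    path = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([path, base]) != base:
        raise ValueError("path traversal attempt blocked")
    with open(path, "wb") as f:
        f.write(data)
```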
LocalAI Vulnerabilities
LocalAI, an open-source platform for running self-hosted LLMs, carries two major flaws of its own.
The first, CVE-2024-6983, enables malicious code execution by letting attackers upload a crafted configuration file. The second, CVE-2024-7010, lets attackers recover valid API keys by measuring server response times, a timing attack that deduces the key one character at a time.
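The timing leak arises because a byte-by-byte comparison returns as soon as it hits a mismatch, so a guess that shares more leading characters with the real key takes measurably longer to reject. The sketch below is illustrative, not LocalAI’s code; the standard remedy is a constant-time comparison such as Python’s hmac.compare_digest.

```python
# Why CVE-2024-7010-style timing attacks work (illustrative only).
import hmac

API_KEY = "sk-secret-key"   # hypothetical secret

def check_key_vulnerable(guess: str) -> bool:
    # Short-circuits on the first wrong character, leaking timing
    # information about how many leading characters are correct.
    if len(guess) != len(API_KEY):
        return False
    for a, b in zip(guess, API_KEY):
        if a != b:
            return False
    return True

def check_key_fixed(guess: str) -> bool:
    # Constant-time comparison: runtime does not depend on where
    # the first mismatch occurs.
    return hmac.compare_digest(guess.encode(), API_KEY.encode())
```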
Protecting Against Vulnerabilities
In response to these findings, Protect AI introduced a new tool called Vulnhuntr, an open-source Python static code analyzer that uses large language models to detect vulnerabilities in Python codebases.
Conclusion
These discoveries highlight the critical importance of ongoing vulnerability assessment and security updates in AI and ML systems to protect against emerging threats in the evolving landscape of AI technology.