AI News: Meta Unveils Framework To Restrict High-Risk AI Systems
Meta has introduced a new policy, the Frontier AI Framework, outlining its approach to restricting the development and release of high-risk artificial intelligence systems. The framework addresses concerns about the dangers of advanced AI technology, particularly in cybersecurity and biosecurity. In it, Meta states that some AI models may be too risky to release without internal safeguards in place.
Classification of AI Systems
In a recently published policy document, Meta classified AI systems into two categories based on potential risk: high-risk and critical-risk. High-risk AI models could aid in cyber or biological attacks, while critical-risk AI systems could lead to catastrophic consequences.
Security Measures
Meta will halt the development of any system classified as critical-risk and implement additional security measures to prevent unauthorized access. High-risk AI models will be restricted internally, with efforts made to reduce risks before release. This approach reflects the company's stated commitment to minimizing potential threats associated with artificial intelligence.
In related news, amid concerns over AI data privacy, DeepSeek, a Chinese startup, has been removed from Apple's App Store and Google's Play Store in Italy, where the country's data protection authority is investigating its data collection practices.
Risk Assessment
To determine AI system risk levels, Meta will rely on assessments from internal and external researchers. The company emphasizes that expert evaluation is crucial in decision-making, as no single test can fully measure risk. A structured review process outlined in the framework ensures senior decision-makers oversee final risk classifications.
Mitigation Measures
For high-risk AI, Meta plans to introduce mitigation measures before considering a release, aiming to prevent misuse while preserving functionality. If an artificial intelligence model is classified as critical-risk, its development will be suspended until safety measures permit controlled deployment.
Open AI Development
Meta's open AI development model has granted broad access to its Llama AI models, resulting in widespread adoption. However, concerns about potential misuse have surfaced, including reports of a U.S. adversary using Llama to develop a defense chatbot. The Frontier AI Framework aims to address these concerns while upholding the company's commitment to open AI development.
In other AI news, OpenAI recently introduced ChatGPT Gov, a secure AI model designed for U.S. government agencies. This launch coincides with DeepSeek's rise and Meta's enhanced security measures, intensifying competition in the AI landscape.