Google, Microsoft, xAI Submit AI Models for Government Safety Checks

Google, Microsoft, and xAI have agreed to share unreleased AI models with the U.S. Department of Commerce's Center for AI Standards and Innovation for pre-release safety testing. The evaluations will focus on national-security risks, including cybersecurity, biosecurity, and chemical weapons threats. The move marks a shift for the Trump administration, which had previously favored a hands-off approach to AI regulation; OpenAI and Anthropic struck similar agreements under Biden two years ago. Reports suggest an executive order may formalize a review process for all new AI models amid growing public concern over job losses, mental health, and cybersecurity risks.