OpenAI pledges to publish AI safety test results more often

May 14, 2025: OpenAI Boosts AI Safety Transparency - OpenAI has pledged to publish AI model safety evaluation results more regularly through its newly launched Safety Evaluations Hub. The hub reports how models perform on tests for harmful content, jailbreaks, and hallucinations. The initiative responds to criticism that the company rushed safety testing and released insufficient technical reports.

Recently, OpenAI faced backlash over an overly agreeable GPT-4o model update, which it subsequently rolled back. The company plans improvements, including an opt-in alpha phase to gather user feedback before launching new models.
