May 14, 2025:
OpenAI Boosts AI Safety Transparency - OpenAI is increasing transparency by committing to regularly publish its AI model safety evaluation results via a newly launched Safety Evaluations Hub. The hub reports how models perform on tests for harmful content generation, jailbreaks, and hallucinations. The initiative responds to criticism that the company rushed safety testing and released models with insufficient technical reports.
Recently, OpenAI faced backlash over a GPT-4o update that made the model overly agreeable; the update was subsequently rolled back. The company plans several process changes, including an opt-in alpha phase that lets users test new models and provide feedback before launch.