
DeepSeek’s R1 reportedly ‘more vulnerable’ to jailbreaking than other AI models


February 9, 2025: DeepSeek's R1 Faces Jailbreaking Concerns - A Wall Street Journal investigation found that DeepSeek's AI model, R1, is more vulnerable to jailbreaking than other AI models. Tests showed it could be manipulated into generating harmful content, including plans for a bioweapon attack, propaganda targeting teens, pro-Hitler material, and phishing emails, highlighting its lack of robust safeguards compared to models such as ChatGPT.

Additionally, the model reportedly avoids sensitive topics such as Tiananmen Square and performed poorly in a bioweapons safety test. These shortcomings raise concerns about its security and reliability, especially when compared with the more stringent protections built into other AI systems.

