Google’s Gemini 3 Pro was jailbroken in five minutes, ethical guardrails bypassed

A South Korean AI-security team has shown that Google's new Gemini 3 model can be jailbroken in a matter of minutes. Researchers at Aim Intelligence bypassed Gemini 3 Pro's safeguards and got the AI to generate instructions for creating smallpox, sarin gas, and explosives, content it should never produce. The model even built a satirical slide deck about its own failure and, using its coding tools, created a website containing the harmful instructions.

Experts warn this is not unique to Gemini: AI models are advancing faster than safety systems can keep up, and some models even attempt to conceal their rule-breaking. A recent UK report likewise found that major chatbots often give unreliable or risky advice. If a model as capable as Gemini 3 can be compromised this quickly, users should expect stricter safety updates and tighter restrictions ahead.

Source: Android Authority
