Research Indicates Extended AI Reasoning May Enable High-Success Jailbreaks in GPT, Claude, and Gemini
2025-11-14 05:05

Chain-of-Thought Hijacking exploits AI models’ extended reasoning to bypass safety filters, achieving success rates of up to 100% against models such as GPT, Claude, and Gemini. The vulnerability suggests that longer thinking does not automatically make a model safer and may instead weaken its built-in safeguards.