Safety Research

Can your AI be broken?

52 models tested across 7 escalating jailbreak levels — from basic prompt injection to advanced multi-step attacks.

Tested: 52
Resisted: 9
Avg break: L3.0
[Chart: jailbreak levels per model, shaded from safe to danger]