Shared via Fedilab
@malwaretech@infosec.exchange 🔗 https://infosec.exchange/users/malwaretech/statuses/109814271802732073
Someone on Reddit found a hilarious exploit to bypass ChatGPT's ethics filter and it actually works.
https://www.reddit.com/r/ChatGPT/comments/10s79h2/new_jailbreak_just_dropped/