SlashNext published a research report detailing a unique ChatGPT-based module created by cybercriminals with the explicit intent of leveraging generative AI for nefarious purposes.
The SlashNext team collaborated with Daniel Kelley, a reformed black hat computer hacker who is researching the latest threats and tactics employed by cybercriminals. Delving into cybercrime forums, Kelley and SlashNext uncovered discussion threads wherein bad actors were:
- freely sharing with one another tips for how to leverage ChatGPT to refine emails that can be used in phishing or BEC attacks;
- promoting “jailbreaks” for interfaces like ChatGPT: specialized prompts and inputs designed to manipulate such interfaces into disclosing sensitive information, producing inappropriate content, or executing harmful code;
- promoting a custom ChatGPT-like module, presented as a black hat alternative without any ethical boundaries or limitations.
These findings have broad implications for the security community: bad actors are not only manipulating generative AI platforms like ChatGPT for malicious purposes, but also creating entirely new platforms built on the same technology and designed specifically to serve their illicit ends.