AI models face rising threats from cybercriminal innovation
Chatbots are versatile tools that lend themselves to creative use. However, they also present an opportunity for cybercriminals, who are already exploiting DeepSeek, a Chinese LLM that competes with ChatGPT.
Artificial intelligence opens up new possibilities, and AI tools can streamline everyday work. But these benefits come with risks, so it is no surprise that the creators of tools like ChatGPT are continuously working to improve their safety systems. The creators of Qwen and DeepSeek, however, are currently taking a less rigorous approach: the latter model can even advise on how to commit theft.
Hidden online forums are teeming with manuals that explain how to use modern AI to produce harmful content and bypass built-in protective mechanisms. Cybercriminals have already developed detailed instructions for jailbreaking, a technique that strips away the restrictions imposed on AI models and allows them to generate uncensored content.
New AI tools targeted by hackers
Experts at Check Point Research have highlighted four ways hackers exploit the new AI models. Cybercriminals use Qwen to build sophisticated software for stealing confidential data, letting them capture payment card details, login credentials, and user passwords, which they then trade on the black market.
Techniques such as "Do Anything Now" or "Plane Crash Survivors" let cybercriminals manipulate AI models into generating content that would normally be blocked. As a result, artificial intelligence can assist in writing malware and in preparing attacks on computer systems.
Artificial intelligence vs. banks
New attack methods allow cybercriminals to bypass banks' anti-fraud systems. Experts at Check Point Research have found hackers sharing techniques for intercepting transactions and circumventing financial institutions' security measures.
Cybercriminals also leverage ChatGPT, Qwen, and DeepSeek to refine spam scripts, making their campaigns more effective and harder for spam filters to detect.
It is evident, then, that these new tools present both opportunities and threats on a global scale. Technology companies must introduce effective protective mechanisms to curb the growing wave of abuse.