Hackers create ChatGPT-driven Telegram bots that can write malware
Cybercriminals are using OpenAI's ChatGPT to create Telegram bots that can write malware and steal your data, new research has revealed.
Currently, if you ask ChatGPT to write a phishing email impersonating a bank or to create malware, it will refuse.
However, hackers are working their way around ChatGPT’s restrictions, and there is active chatter on underground forums disclosing how to use the OpenAI API to bypass ChatGPT’s barriers and limitations.
“This is done mostly by creating Telegram bots that use the API. These bots are advertised in hacking forums to increase their exposure,” according to CheckPoint Research (CPR).
The cybersecurity company had earlier discovered cybercriminals using ChatGPT to improve the code of basic infostealer malware from 2019.
There has been much discussion and research on how cybercriminals are leveraging the OpenAI platform, specifically ChatGPT, to generate malicious content such as phishing emails and malware.
The current version of OpenAI’s API is used by external applications and has very few anti-abuse measures in place. As a result, it allows the creation of malicious content, such as phishing emails and malware code, without the limitations or barriers that ChatGPT enforces in its user interface.
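To make the distinction concrete: any external application can talk to the OpenAI API directly over HTTP, outside the ChatGPT web interface. The sketch below, using only Python's standard library, shows what such a call looks like. The endpoint and model name reflect the public completions API as it existed at the time of the research and may since have changed; the prompt is deliberately benign.

```python
import json
import urllib.request

# Endpoint and model name are assumptions based on the public OpenAI
# completions API at the time of the research; both may change.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct the HTTP request an external application would send
    directly to the API, outside the ChatGPT user interface."""
    payload = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Build (but do not send) a request with a benign prompt and a
# placeholder key. Actually sending it via urllib.request.urlopen(req)
# would require a valid API key and network access.
req = build_request("Write a short poem about the sea.", "sk-PLACEHOLDER")
```

The point is simply that the request carries only an API key for authentication; none of the interface-level content filtering sits between the caller and the model.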
In an underground forum, CPR found a cybercriminal advertising a newly created service: a Telegram bot using the OpenAI API without any limitations or restrictions.
“A cybercriminal created a basic script that uses OpenAI API to bypass anti-abuse restrictions,” the researchers noted.
The cybersecurity company has also witnessed attempts by Russian cybercriminals to bypass OpenAI’s restrictions in order to use ChatGPT for malicious purposes.
Cybercriminals are increasingly interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient.