Are cybercriminals becoming even more dangerous with ChatGPT?

22 February 2023, by Horst Buchwald

Berlin, February 22, 2023

Can ChatGPT do more for cybersecurity in the future, or do cybercriminals now have a tool they can use to achieve their goals even more effectively?

First of all, it is important to note that the OpenAI Terms of Service for ChatGPT specifically prohibit the generation of malware, ransomware, keyloggers, and viruses. That doesn’t stop the “real pros,” though. So it was no surprise that underground forums quickly filled up with tips on how to use ChatGPT to compile malware, phishing emails and more. The security company Check Point analyzed the results and came to the conclusion that AI tools will not revolutionize cyber attacks, i.e., they will not conjure up entirely new types of attack. But they could help run malicious campaigns more efficiently.

Phishing attacks are the most common component of malicious hacking and scam campaigns. Whether attackers are sending emails to distribute malware or phishing links, or to convince a victim to transfer money, the email is the key tool in the initial approach.

This reliance on email means gangs need a steady stream of clear and usable content. In many cases – especially with phishing – the attacker’s aim is to persuade a person to do something they would not normally do, e.g. transfer money. Fortunately, many of these phishing attempts are currently easy to identify as spam. But an efficient automated copywriter could make those emails far more persuasive.

Cybercrime is a global industry, with criminals in all sorts of countries sending phishing emails to potential targets around the world. This means that language can be a barrier, especially for the more sophisticated spear-phishing campaigns that rely on victims believing they are dealing with a trusted contact. A victim is unlikely to believe that if the supposed colleague clearly struggles with the language, or if the emails are full of uncharacteristic spelling and grammatical errors or odd punctuation. Hence the familiar tell: Russian-speaking criminals often have difficulties with the English language.

But if the AI is properly exploited, a chatbot could be used to write text for emails in the attacker’s desired language.

Theoretically, there are protective measures to prevent misuse. For example, ChatGPT requires users to register an email address and also requires a phone number to verify registration.

And while ChatGPT refuses to write phishing emails, it is possible to ask it to create email templates for other messages that are commonly exploited by cyber attackers. Such pretexts include, for example, a message announcing that an annual bonus is being offered, that an important software update needs to be downloaded and installed, or that an attached document must be viewed urgently.

Email remains our number one productivity tool, and that is precisely why phishing is so dangerous for everyone.
