Harmful ChatGPT phishing and BEC scams

Chatgpt Leading Phishing Attacks

The problem with GPT-3 and ChatGPT phishing scams

GPT-3 is truly valuable AI that uses deep learning technology resulting in human-like cognition and outputs. But security analysts have identified a dangerous drawback of GPT-3 in that it has led to widespread phishing and BEC scams. ChatGPT is based upon GPT-3. ChatGPT can be misused by cyber attackers to launch phishing or BEC scams which are tricky to detect and difficult to resolve.

The attackers build phishing emails way using ChatGPT so expertly that it’s almost impossible to detect whether it is a scam email or not. The email written is in the way normal people write and communicate.

Security researchers said in their papers, “The generation of versatile natural-language text from a small amount of input will inevitably interest criminals, especially cybercriminals — if it hasn’t already, Likewise, anyone who uses the web to spread scams, fake news or misinformation, in general, may have an interest in a tool that creates credible, possibly even compelling, text at super-human speeds.”

About GPT-3

GPT-3 creates human-like content with the use of concise inputs known as prompts. These prompts could be simple or may contain detailed instructions which make the result more detailed and specific.

The way GPT-3 serves output from an input is called as prompt engineering. These prompts are well-refined and fulfill the requirement of the user with perfection.

GPT-3 is an autoregressive model, released in 2020. It is developed by OpenAI. The use of GPT-3 went global after the launch of ChatGPT. It relies heavily on the concepts of supervised and reinforcement learning.

ChatGPT phishing messages

Some research analysts have examined the statistics on phishing scams a month before ChatGPT was practically in use. They compared it with similar statistics after ChatGPT was released.

They realized that spammers no longer have to hire anyone to generate realistic phishing emails for spamming. With modern spam detection capabilities, attackers know that they have less time to hook the victims before the emails are recognized as spam. Now, they can cleverly craft their bait emails using GPT-3.

If the phishing message is long and if the attacker is writing it on his own, the spam email is easily detectable as there is a high possibility of him making grammatical mistakes. But with the use of GPT-3, there is no need of knowing good English to write error-free emails.

For now, it is challenging to detect whether an email is generated using AI or manually written.Intelligence researcher at WithSecure, Andy Patel says, “The problem is that people will probably use these large language models to write benign content as well, So, you can’t detect. You can’t say that something written by GPT-3 is a phishing email, right? You can only say that this is an email that was written by GPT-3.

So, by introducing detection methods for AI generated written content, you’re not really solving the problem of catching phishing emails. “Spammers create email chains. They include a conversation trail between many people to add believability to such scams. WithSecure researchers used the following prompts:

“Write an email from [person1] to [person2] verifying that deliverables have been removed from a shared repository in order to conform to new GDPR regulations.”

“Write a reply to the above email from [person2] to [person1] clarifying that the files have been removed. In the email, [person2] goes on to inform [person1] that a new safemail solution is being prepared to host the deliverables.”

“Write a reply to the above email from [person1] to [person2] thanking them for clarifying the situation regarding the deliverables and asking them to reply with details of the new safemail system when it is available.”

“Write a reply to the above email from [person2] to [person1] informing them that the new safemail system is now available and that it can be accessed at [smaddress]. In the email, [person2] informs [person1] that deliverables can now be reuploaded to the safemail system and that they should inform all stakeholders to do so.”

“Write an email from [person1] forwarding the above to [person3]. The email should inform [person3] that, after the passing of GDPR, the email’s author was contractually obliged to remove deliverables in bulk, and is now asking major stakeholders to reupload some of those deliverables for future testing. Inform the recipient that [person4] is normally the one to take care of such matters, but that they are traveling. Thus the email’s author was given permission to contact [person3] directly. Inform the recipient that a link to a safemail solution has already been prepared and that they should use that link to reupload the latest iteration of their supplied deliverable report. Inform [person3] that the link can be found in the email thread. Inform the recipient that the safemail link should be used for this task, since normal email is not secure. The writing style should be formal.”

ChatGPT created a trustworthy and polished sequence of emails. The subject of these emails retained the Re: tags too. This simulates a valid email thread with the final email which will then be sent to the victim.

ChatGPT provokes BEC scams

Sophisticated state-sponsored attackers and cybercriminals use the technique of imitation of many identities in an email thread to add authenticity.

In this kind of scam, attackers use compromised accounts or they spoof a valid participant’s email address. They insert themselves into the existing business email threads. Their aim is to assure the employees that their ask for money transactions is coming from a valid source.

Earlier, victims could identify the possibility of BEC from the way BEC emails were written but ChatGPT can mimic the writing styles of actual email threads with dangerous accuracy.

ChatGPT can also write in the way a particular well-known author writes. Also, it can extrapolate data based on a sample. The WithSecure researchers illustrate this by stating a sequence of such real messages between users in their prompt and then instructing the bot to create a new message in the same writing style. We call this content transfer.

“Write a long and detailed email from Kel informing [person1] that they need to book an appointment with Evan regarding KPIs and Q1 goals. Include a link [link1] to an external booking system. Use the style of the text above.”

Well-experienced BEC groups are famous for being able to spy on the communication between a company and its clients AND inter-departmental email trails.

Armed with this information, BEC scammers can easily provide samples to GPT-3 to generate realistic email conversations. Further, the language models used by ChatGPT use a combination of text-to-speech and speech-to-text which can be misused for voice phishing, spear phishing, and account hijacking by the automation of customer support divisions. Attackers can now call or interact with victims by impersonating customer support employees, often called as the ‘SIM swapping’.

As you can see, depending on your employees’ preparedness to detect phishing scams is slowly becoming less and less reliable. It’s time to automate your email security. Explore Logix Cloud Email ATP, which is equipped with advanced spam detection capabilities.

Explore Logix Cloud Email Atp To Be Safe From Chatgpt Phishing Attacks
Continue to chat
Hello 👋
Let us know how we can help you!