Reality check: What impact does GenAI actually have on phishing?
Phishing attacks and email scams increasingly use AI-generated content, as tests with AI-content detection tools show.
Evidence of this is provided by the report from cybersecurity provider Abnormal Security, AI Unleashed:
5 Real-World Email Attacks Likely Generated by AI in 2023. The results show that the likelihood that
today's attack emails are written by AI is very high.
The analysts make this clear with a color scheme and numerous examples. The encoding indicates how
predictable each word is given the context to its left: a colored word is among the top 10, 100, or
1,000 next words predicted by the AI model. The coloring is pretty damning, because a human writer is
unlikely to be predictable enough to match the output of LLM-based AI tools. The report also points
out the absence of typos and grammatical errors, which makes these emails more convincing and harder
to identify as the work of a social engineer.
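The idea behind this kind of coloring can be sketched in a few lines of code. The following Python
snippet is a minimal sketch, not the report's actual tooling: it assumes GPT-2 loaded via the Hugging
Face transformers library, ranks each word of a sample email against the model's predictions for that
position, and assigns it to the same top-10, top-100, or top-1000 buckets the analysts color-code.

    # Sketch of predictability scoring in the style described above.
    # Assumes: pip install torch transformers (GPT-2 as the reference model).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Hypothetical sample text for illustration
    text = "Please review the attached invoice and confirm the payment details today."
    ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        # logits[0, i] scores every possible token for position i + 1
        logits = model(ids).logits

    for pos in range(ids.shape[1] - 1):
        actual_id = int(ids[0, pos + 1])
        # Rank = number of tokens the model considered more likely, plus one
        rank = int((logits[0, pos] > logits[0, pos, actual_id]).sum().item()) + 1
        bucket = ("top-10" if rank <= 10 else
                  "top-100" if rank <= 100 else
                  "top-1000" if rank <= 1000 else ">1000")
        print(f"{tokenizer.decode(actual_id)!r}: rank {rank:5d}  {bucket}")

The more words that land in the top-10 bucket, the more machine-like the text: that is exactly the
pattern the analysts' coloring makes visible in the AI-generated samples.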
In general, phishing trends are seasonal. At the beginning of the year it's subject lines about tax
returns, then it's the summer holidays, and at the end of the year it's the holiday season. The use of
AI is not yet necessarily evident here. However, there are spectacular cases in which AI is used for
social engineering. A current example is the $25 million CFO fraud at a company in Hong Kong that
relied on deepfakes; spear phishing and voice fakes were also part of the attack. Such cases remain
exceptions, though, in which the attackers act in a highly targeted way.
The situation is different with mass phishing emails. KnowBe4's analysis shows that these phishing
attempts are characterized by longer emails with several paragraphs. Subject lines that work well here
are usually policy updates or announcements. LLMs can help with this type of phishing email. However,
cybercriminals can also harvest such emails from compromised mailboxes, so the ability to generate
long emails automatically is neither new nor especially useful to them. LLMs are also less useful for
shorter emails that mimic spontaneous, informal conversations, which typically open with quick, short
questions and requests rather than multi-part essays. Ultimately, criminals already have a large
collection of these emails, because they have perfected this type of social engineering in recent
years. LLM output tends to feel too formal and stiff for these interactions; mini-essays are simply
too much of a good thing for this kind of communication.
In summary, LLMs have some benefit for cybercriminals, but it appears to be limited for now. The basics
of anti-social engineering training still apply. Still, there's no reason why an AI tool can't be trained to
examine legitimate email content from well-known brands to create near-perfect phishing emails.
Companies must prepare their employees for this. The best way to do so is with modern forms of
security awareness training that combine simulated phishing with interactive training content,
accessible at any time via a central platform.