
Artificial intelligence (AI) is making its mark in many ways. While the hype has been deafening, large language models (LLMs) such as ChatGPT have begun to offer a glimpse of the potential of generative AI (GenAI). That potential is huge; for example, UC San Diego predicts that AI will enable us to manage chronic health conditions, power self-driving vehicles for delivery and transportation, enhance the prediction of ‘atmospheric rivers,’ and support automated lifesaving interventions in emergencies, among other applications.
But with the good inevitably comes the bad. Malicious actors are also leveraging the power of AI, and an early example is GenAI phishing. Phishing is nothing new: emails from Nigerian princes looking to park millions in your bank account in exchange for a healthy cut date back to the early days of email. For many, these phishing emails were easy to see through, riddled as they were with spelling and grammatical errors and offers that were simply too good to be true. AI phishing changes this by turbocharging the phishing process with clever, sophisticated content that can fool even the most tech-savvy consumer.
According to the 2024 mid-year assessment from cybersecurity firm SlashNext, phishing scammers pocketed over $2 billion in 2022 alone. Since the fourth quarter of 2022—when ChatGPT was launched—there’s been a 4,151 percent increase in malicious phishing emails.
In this blog, we examine how hackers are using GenAI to craft more convincing phishing scams at scale. To protect against these attacks, cybersecurity experts must understand how cybercriminals are exploiting the technology and then deploy AI as a defense mechanism.
What Is Phishing?
Phishing is generally believed to have started in the 1990s. Early phishing techniques involved hackers impersonating America Online (AOL) employees to trick users into revealing login credentials and other personal information.
Now, armed with stolen lists of email addresses acquired from the dark web, hackers send thousands of messages, adding up to an estimated 3.4 billion spam emails a day in 2025, according to IT provider AAG. Most are deleted, but occasionally someone falls victim to a scam, perhaps through anxiety or curiosity, and ends up clicking on malicious URLs, downloading virus-riddled files, or sharing authentication information and personal data that cybercriminals can use to access bank accounts.
Though phishing has been a concern for decades, easily implemented best practices have helped people spot conventional scams. Spelling and grammatical errors, formatting issues, incorrect names, poorly reproduced company logos, and suspicious return email addresses all give scammers away. Unfortunately, GenAI is changing all that, and these telltale signs can no longer be relied upon.
Phishing with Generative AI
Using GenAI’s deep-learning models, cybercriminals can quickly generate high-quality text, images, and other content tailored to real-time user behavior. GenAI doesn’t change the basic mechanism of a phishing attack, which remains essentially a numbers game. What is different is that instead of sending millions of randomly targeted emails, criminals can now launch complex, targeted campaigns that evade detection far faster and with far less effort.
AI has resolved many of the issues that plagued early spam efforts: grammar and spelling are flawless, company logos are reproduced faithfully, and the writing style is compelling and concise.
Attacks can also be timed precisely to catch the attention of the target. For example, LLMs can pull real-time information from content producers, retailers, and news websites to incorporate up-to-date details into phishing emails. Such details make the messages believable and pressure targets into responding to calls to action.
AI also allows hackers to try new tricks. Forget emails: how about using GenAI to clone a trusted contact’s voice and create fake audio? What do you do if you receive a voice message from a fake CFO who sounds just like your boss, requesting a money transfer?
It is not as though it is difficult for these cybercriminals to lay their hands on the tools needed to get started. Hacker tools such as the nefarious WormGPT, reportedly built on the open-source GPT-J language model, and paid-for software such as FraudGPT are readily available on the dark web. Both are GenAI tools stripped of safeguards, designed to respond to requests for phishing emails, code that spoofs specific websites, or simulated voices.
AI to Combat the Dangers of Generative AI Phishing
How to fight back, then? One way is to leverage enterprise email security that relies on the Domain-based Message Authentication, Reporting, and Conformance (DMARC) protocol. DMARC verifies the identity of email senders via the Domain Name System (DNS), building on the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) protocols. It is particularly useful for preventing spoofing of a company’s own domains, a phishing technique often used to deceive employees into revealing sensitive company data, but it cannot catch messages sent from lookalike or unrelated domains that the company does not control.
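To make this concrete, the short sketch below checks what SPF and DMARC policies a domain actually publishes, since both live in DNS TXT records. It is a minimal diagnostic sketch, not a production email-security check: it assumes the third-party dnspython package, and example.com is a placeholder domain.

```python
# Minimal sketch: inspect a domain's published SPF and DMARC records.
# Assumes the third-party "dnspython" package (pip install dnspython);
# "example.com" is a placeholder domain.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"

# SPF is published as a TXT record on the domain itself, starting "v=spf1".
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]

# The DMARC policy lives in a TXT record at the _dmarc subdomain, e.g.
# "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com".
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```

A policy of p=reject instructs receiving mail servers to discard messages that fail SPF/DKIM alignment, which is what blocks direct spoofing of the domain; p=none merely requests reports and blocks nothing.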
A more effective approach is to use AI to detect AI scams. AI tools have proven effective at spotting AI-powered phishing attempts. Instead of analyzing previous attacks and chasing the hackers, modern tools train on real-time business data, such as how employees interact with their inboxes. Defensive AI tracks tone, sentiment, and content, as well as when and how employees follow or share links. This allows the tools to maintain context and a deep understanding of what “normal” communication looks like, so they can recognize suspicious activity that may indicate an attack.
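As a rough illustration of the idea, and not any vendor’s actual implementation, the sketch below trains an unsupervised anomaly detector on a handful of hypothetical per-message features (time of day, link count, sender familiarity, urgency). It uses scikit-learn’s IsolationForest; real defensive AI products learn from far richer behavioral signals than these.

```python
# Illustrative sketch: flag anomalous email interactions with an
# unsupervised model. The feature set and data here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-message features: [hour_received, num_links, sender_seen_before (0/1),
# urgency_score in 0-1 (e.g., from a tone/sentiment classifier)]
history = np.array([
    [9,  1, 1, 0.1],
    [10, 0, 1, 0.0],
    [14, 2, 1, 0.2],
    [11, 1, 1, 0.1],
    [16, 0, 1, 0.0],
])

# Fit the detector on the employee's normal inbox activity.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(history)

# A 3 a.m. message from an unknown sender, with several links and an
# urgent tone, falls outside the learned baseline.
suspicious = np.array([[3, 5, 0, 0.9]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

The key design point is that the model learns each mailbox’s baseline rather than matching known-bad signatures, which is why it can flag a novel, well-written AI-generated phish that no blocklist has seen before.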
Conclusion
Malicious actors are already leveraging the power of generative AI. An early example is turbocharging the phishing process with clever, sophisticated content that can fool even the most tech-savvy consumer. Grammar and spelling are correct, corporate style is flawless, and the copy is compelling. Attacks can also be timed precisely to catch the attention of the target, with LLMs capturing real-time information to incorporate up-to-date details into phishing emails.
To protect against these targeted phishing attacks, cybersecurity professionals must understand how cybercriminals exploit the technology and then use AI for defensive purposes. Fortunately, modern defensive AI tools develop context and a deep understanding of what normal activity looks like, enabling them to detect the more nuanced signs of a phishing attempt.
Source: Mouser blog