AI Tools Like ChatGPT Fuel Surge in Phishing Emails
The prevalence of generative artificial intelligence tools, such as ChatGPT, has led to a staggering surge in cyber threats, notably malicious phishing emails.
According to a report by cybersecurity firm SlashNext, there has been an alarming 1,265% increase in these deceitful emails since late 2022. The surge includes a 967% rise in credential phishing specifically, posing a severe threat to cybersecurity.

The report, drawing insights from both threat intelligence and a survey of over 300 cybersecurity professionals in North America, highlights the concerning trend of cybercriminals exploiting AI tools to craft sophisticated and highly targeted phishing attacks.
These attacks, particularly Business Email Compromise (BEC) attacks, have risen significantly in volume, averaging around 31,000 attacks per day.
Patrick Harr, CEO of SlashNext, underscores the role of AI in accelerating the speed, diversity, and effectiveness of cyber threats. He emphasizes how AI technology empowers threat actors to rapidly modify malware code and create numerous variations of social engineering attacks, significantly boosting their success rates.

The financial impact of these phishing attacks is substantial. Harr cites FBI statistics revealing losses amounting to billions of dollars, with $2.7 billion attributed to BEC attacks in 2022 alone. This financial incentive drives cybercriminals to intensify their phishing and BEC attempts.
The introduction of tools like ChatGPT has eased entry for novice cybercriminals and equipped experienced attackers with scalable means to execute targeted spear-phishing campaigns.
Harr notes that malicious generative AI chatbots, such as WormGPT and FraudGPT, have lowered the barrier for conducting malicious activities, leading to an alarming proliferation of sophisticated phishing tactics.

Furthermore, SlashNext researchers have identified concerning developments where hackers exploit AI "jailbreaks" to bypass the built-in safety constraints of AI chatbots. This tactic converts seemingly innocuous tools like ChatGPT into weapons capable of deceiving victims into divulging sensitive information or login credentials, enabling further breaches.
Chris Steffen from Enterprise Management Associates highlights the transformation of phishing emails from poorly constructed, easily identifiable scams to highly convincing messages.
Cybercriminals now leverage AI to mimic authentic communication styles, incorporating personal details and referencing familiar contexts to increase credibility.

To combat these escalating threats, cybersecurity leaders must adopt proactive measures. Continuous user education is crucial, instilling a security-conscious culture where employees are vigilant and report suspicious activities.
Implementing advanced email filtering tools that leverage machine learning and AI to identify and block evolving phishing attacks is also recommended.
Regular security audits and vulnerability assessments are necessary to fortify systems against potential exploits.
Strengthening existing security infrastructure, coupled with a zero-trust strategy, can mitigate the vulnerabilities exposed by AI-generated email attacks.
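To make the email-filtering recommendation above concrete, the following is a minimal sketch of the kind of content scoring such a filter might apply. It is an illustration only: the keyword patterns, weights, and threshold are invented for this example, and real filtering products train machine-learning models on labeled mail rather than hard-coding rules.

```python
import re

# Illustrative patterns and weights; a production filter would learn
# these from labeled messages instead of hard-coding them.
SUSPICIOUS_PATTERNS = {
    r"verify your account": 3,
    r"urgent(ly)?": 2,
    r"password": 2,
    r"wire transfer": 3,
    r"click (the )?link": 2,
}

# Phishing emails typically push the reader toward an embedded URL.
URL_PATTERN = re.compile(r"https?://\S+")


def phishing_score(email_text: str) -> int:
    """Return a crude suspicion score for an email body (higher = riskier)."""
    text = email_text.lower()
    score = 0
    for pattern, weight in SUSPICIOUS_PATTERNS.items():
        if re.search(pattern, text):
            score += weight
    # Count embedded links, a common credential-harvesting vector.
    score += 2 * len(URL_PATTERN.findall(text))
    return score


def is_suspicious(email_text: str, threshold: int = 5) -> bool:
    """Flag a message when its score crosses the (illustrative) threshold."""
    return phishing_score(email_text) >= threshold
```

A rule-based scorer like this is exactly what AI-generated phishing defeats, since convincing prose avoids the telltale keywords; that is why the article recommends filters that leverage machine learning on many signals (sender reputation, link targets, writing style) rather than static rules.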
In summary, the exponential growth of phishing attacks, fueled by AI-driven advancements, poses a severe threat to cybersecurity. Cybercriminals’ exploitation of generative AI tools demands a multi-layered defense approach and constant vigilance from organizations to counteract these evolving threats effectively.








