October 9, 2025

Summary:

  • Recent research published by the IEEE Computer Society suggests that phishing training is far less effective than widely assumed. 
  • The rise of generative AI is making it harder for individuals to determine what's real and what's fake. 
  • Ideally, companies would take a layered approach that involves modern training tactics and technology. 

More than 90% of successful cyberattacks begin with a phishing email. Companies spend billions each year on training programs designed to help employees spot and avoid these scams. But new research published by the IEEE Computer Society suggests that phishing training isn’t as effective as many had believed. 

The research focused on the real-world effectiveness of anti-phishing training in the healthcare sector. Researchers found that, on average, users who received routine anti-phishing training were only marginally better at detecting phishing scams than those who did not. Overall, the difference was approximately 1.7%. For several phishing campaigns, at least 10% of users in each group failed the simulated attack.

“Experience tells us that click-through simulations and awareness posters alone don’t build lasting resilience,” said IEEE Senior Member Elyson De La Cruz. “This is potentially due to over-exposure to simulations, which may cause alert fatigue, contributing to worse results at spotting real attacks.”  

According to IEEE Senior Member Steven Furnell, the success of phishing training highly “depends on the form that it takes and how much effort is put into planning and supporting the users involved.” 

Requiring employees to complete online training that discusses threats and tests their ability to spot a few simplified examples isn't likely to be effective on its own, Furnell said. Neither is running a series of mock phishing campaigns. These programs need internal awareness campaigns to support and reinforce them.  

Additionally, attackers’ growing use of AI is making it more difficult for individuals to discern what’s real and what’s a scam. IEEE Senior Member Vaibhave Tupe cautions that “while training can raise awareness, it does not reliably protect organizations against the scale and sophistication of modern phishing attacks.”

Generative AI’s Role in Increased Phishing Scams

According to McKinsey, there has been a 1200% surge in phishing attacks since the rise of generative AI in 2022. For threat actors, AI opens the door to refining their attack strategies in real time, especially by making phishing attempts look more legitimate. 

IEEE Senior Member Kayne McGladrey said that “AI-generated phishing removes all the traditional warning signs that training programs teach people to look for.” 

Typical training tells people to watch for bad grammar, weird formatting or implausible scenarios. 

“However, AI can now create emails that are grammatically perfect, properly formatted and believable. It can even personalize attacks using information scraped from social media or data breaches.” 

While AI does make it easier to create highly convincing phishing attacks, IEEE Senior Member Suélia de Siqueira Rodrigues Fleury Rosa says that AI is an opportunity for security leaders and organizations to innovate. 

“The rise of agentic AI isn’t just a threat vector. It’s an opening for interdisciplinary innovation in security education,” she said. “By studying how autonomous systems plan, learn and make decisions, we can build defensive artificial intelligence systems that anticipate attacker moves. Universities and training programs must evolve to cover both the technical and ethical dimensions of AI-powered offense and defense.” 

What Works? 

So if phishing training doesn’t work, then what does? 

“Effective phishing education requires immersive, engaging experiences that make cybersecurity thinking intuitive rather than burdensome,” IEEE Senior Member Shaila Rana said. 

She noted that virtual reality and augmented reality environments can simulate realistic workplace scenarios where employees practice making decisions in safe, sandboxed, consequence-free settings while receiving immediate, constructive feedback. Gamification elements, interactive storytelling and scenario-based learning that adapts to individual roles and risk profiles prove more effective than generic email simulations, she continued. 

“Ideally, future anti-phishing solutions should combine AI technical defenses with human-centered design principles that make secure behavior the easiest option.”

For McGladrey, however, technical defenses need to be the primary strategy.

“We’re moving into a world where even security-aware people can’t reliably tell the difference between legitimate and AI-crafted emails,” he said. 

Ideal Anti-Phishing Solutions 

Moving forward, phishing training should be modernized to address and adapt to ongoing threats, especially as AI usage grows worldwide. 

According to IEEE Senior Member Márcio Andrey Teixeira, “the ideal phishing defense needs to be layered.” This includes advanced AI filters to block malicious messages; strong authentication, such as passwordless logins, to limit damage; and continuous monitoring to detect threats in real time.
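To make the layered idea concrete, here is a minimal, hypothetical sketch in Python. The scoring function, thresholds and field names are illustrative stand-ins, not any specific vendor's API: a real deployment would use a trained classifier, full SPF/DKIM/DMARC evaluation and a proper monitoring pipeline.

```python
# Hypothetical sketch of a layered phishing defense:
# 1) an AI-style content filter, 2) sender authentication, 3) continuous monitoring via logs.
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("phishing-defense")


@dataclass
class InboundEmail:
    sender: str
    subject: str
    body: str
    spf_dkim_pass: bool  # result of sender-authentication checks (assumed computed upstream)


def ai_filter_score(email: InboundEmail) -> float:
    """Stand-in for an AI/ML classifier; returns a phishing likelihood in [0, 1]."""
    suspicious_terms = ("verify your account", "urgent", "password reset")
    hits = sum(term in email.body.lower() for term in suspicious_terms)
    return min(1.0, 0.3 * hits)


def handle(email: InboundEmail, block_threshold: float = 0.7) -> str:
    # Layer 1: the content filter blocks the most likely phishing messages outright.
    score = ai_filter_score(email)
    if score >= block_threshold:
        log.warning("Blocked message from %s (score %.2f)", email.sender, score)
        return "blocked"
    # Layer 2: messages that fail sender authentication are quarantined for review.
    if not email.spf_dkim_pass:
        log.warning("Quarantined unauthenticated message from %s", email.sender)
        return "quarantined"
    # Layer 3: everything else is delivered but logged for continuous monitoring.
    log.info("Delivered message from %s (score %.2f)", email.sender, score)
    return "delivered"


if __name__ == "__main__":
    msg = InboundEmail(
        sender="it-support@example.com",
        subject="Action required",
        body="Urgent: verify your account to avoid suspension.",
        spf_dkim_pass=False,
    )
    print(handle(msg))  # blocked by the content-filter layer in this example
```

The point of the sketch is the ordering: no single layer is trusted to catch everything, and each message that slips past one check is still subject to the next.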

Employees remain a critical layer in any organization's defenses, which is why immersive, scenario-based training that reflects modern AI-driven scams is still needed.

“People are often called the weakest link, but the reality is more complex,” Teixeira said. “Phishing training alone is not enough; it is necessary to have technical defenses.”

Want to learn more about trends in cybersecurity? Check out “Meaningful Momentum or Running in Place: Strides in Our Cybersecurity Readiness.”

