July 25, 2024

You can identify the weakest link in every organization’s cybersecurity defense. It isn’t the firewall, the intrusion detection system, or any other technical assets that cybersecurity professionals rely on. 

The weakest link in cybersecurity is always human. People reuse the same passwords over and over. They procrastinate when it comes time to update software. They click on malicious links that help intruders gain access to networks.

Whenever possible, cybersecurity executives have tried to remove people from the equation with technology. That’s one reason for the rise of automatic software updates: many people simply won’t take the time to install them, no matter how little effort is required.

The rise of generative AI may make the weakest link even weaker. That’s putting renewed emphasis on awareness training that helps employees protect themselves – and the organizations they work for – from scams.

“AI has made phishing scams much more deceptive and the public more vulnerable. The Federal Trade Commission reported that impersonation scams stole more than $1.1 billion in 2023, three times more than impersonation scams in 2020. These impersonations are being supported by AI,” said IEEE Member Rebecca Herold. “These are also tactics that are very hard for current technologies to spot. Because of this reality, businesses must provide regular training to their workers and contractors to understand AI attack indicators and to be aware of the latest tactics to help protect the businesses’ workers, customers, intellectual property and reputation.”

Social Engineering

Have you ever gotten an email that looks like it’s from your bank, except it is full of typos and misspellings, letting you know that something is off? Or a phone call from the tax authorities demanding immediate payment of a tax debt that can only be paid over the phone? What about a social media message from a friend claiming to be stranded in a foreign country and needing money, when you know they are not in that country? 

These are all forms of social engineering, and they are used quite frequently in certain cyber attacks. 

In phishing attacks, for example, attackers will send emails with links to fake web pages for popular companies to trick people into entering their login credentials. But the email doesn’t have to come from a popular company. It can come from your boss or your boss’s boss. And it doesn’t have to be an email. Phishing attacks, in various forms, can include text messages. Thanks to generative AI, attackers can imitate the sound of another person’s voice over the phone or even their image over video conferencing. 

Knowing the Target 

Many phishing campaigns are little more than “fishing expeditions.” The attackers send tens of thousands of fraudulent emails, hoping one or two people will take the bait. 

Spear phishing involves targeted attacks on a specific person or group. Attackers do a significant amount of research ahead of time to improve their chances of success. Artificial intelligence can help them learn about important relationships in a company, such as who reports to whom and what projects people work on.

IEEE Senior Member Marcio Teixeira imagines attackers targeting a financial institution. 

“Attackers could use AI to analyze social media and professional sites, gathering details like job titles and personal interests,” explained Teixeira. “With natural language processing, they could create emails that mimic the style of company executives. These emails might reference recent company activities and include links to a realistic but fake company portal asking for login details. Some emails might also attach malware that steals sensitive data once opened. The AI refines these phishing emails based on how recipients respond, making each new attempt more likely to succeed.”

AI Produces Smarter Cyberscams

The use of generative AI might make it harder to spot phishing attempts for a very specific reason. Phishing attempts in the past tended to be riddled with misspellings and grammatical errors. However, generative AI can turn out grammatically correct writing free of misspellings. 

Teixeira said there are a number of steps companies can take to improve their defenses. Training, he said, can educate employees on phishing tactics and the latest AI-generated scams, teaching them to recognize signs of phishing such as requests for money or sensitive information. Employers can also adopt simulated phishing exercises to help employees practice identifying and reporting attempts.
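To make the idea of "signs of phishing" concrete, here is a minimal sketch of the kind of heuristic check a simulated phishing exercise might teach employees to perform mentally, or that a training tool might automate. The function name, keyword list, and checks are hypothetical illustrations, not any real product's detection logic; real filters (and AI-written scams designed to evade them) are far more sophisticated.

```python
import re

# Hypothetical red-flag vocabulary for illustration only. AI-generated
# scams can be fluent and polite, so keyword lists alone are not enough.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "wire"}

def phishing_indicators(sender_domain: str, body: str, link_domains: list[str]) -> list[str]:
    """Return human-readable red flags found in a (simulated) email."""
    flags = []
    words = set(re.findall(r"[a-z]+", body.lower()))
    if words & URGENCY_WORDS:
        flags.append("urgent or threatening language")
    if any(domain != sender_domain for domain in link_domains):
        flags.append("link domain does not match sender domain")
    if "password" in words or "login" in words:
        flags.append("requests credentials")
    return flags

# A drill message such as a simulated exercise might send:
flags = phishing_indicators(
    sender_domain="example-bank.com",
    body="Urgent: verify your password now or your account will be suspended.",
    link_domains=["example-bank.support-login.net"],
)
print(flags)
```

The point of such exercises is less the code than the habit: pausing to check who an email is really from, where its links actually lead, and whether it pressures the reader to act immediately.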

Lastly, companies need to have clear reporting mechanisms to encourage employees to report suspicious emails immediately. 

“In the context of increasing sophistication in cyber threats driven by AI,” he said, “clear communication and frequent training are two key components of an effective security awareness program.”

Learn more: Simulated phishing exercises are one of the leading ways companies test whether their employees will click on suspect links. Researchers wanted to know how they could make phishing simulations more realistic. Find out what they found in this report on IEEE Xplore.
