July 27, 2023

Jobs in cybersecurity are hot, and qualified employees are in demand. According to data compiled by the National Institute of Standards and Technology, there was a global shortage of 3.4 million cybersecurity workers in 2022, with more than 663,000 cybersecurity job openings in the U.S. alone.

The rise of generative AI tools such as ChatGPT has raised new questions about whether these tools could help close the job gap or exacerbate it. Many experts therefore see them as a double-edged sword.

On the one hand, attackers can use generative AI to “greatly reduce the cost of doing evil in network security,” said Daozhuang Lin, IEEE Senior Member.

On the other hand, generative AI may be able to accelerate the development of software code and new tools to thwart cyberattacks.

“These large language models are a fundamental paradigm shift,” IEEE Member Yale Fox recently told the publication Data Center Knowledge. “The only way to fight back against malicious AI-driven attacks is to use AI in your defenses. Security managers at data centers need to be upskilling their existing cybersecurity resources as well as finding new ones who specialize in artificial intelligence.”

For now, the question of whether generative AI will be a boon or a bane to cybersecurity executives seeking to hire is largely unanswerable. That hasn’t stopped a broader discussion about how it might change the work that cybersecurity professionals do.

Here’s what the experts are saying about the way generative AI may shape the nature of cybersecurity jobs:

  1. Software developers might have to do more validation and testing of AI-generated code, putting more emphasis on quality control skills. Quality control personnel will need to acquire knowledge and skills for evaluating the performance of generative AI models, notes IEEE Senior Member Amol Gulhane.

“They will need to understand the limitations and biases associated with these models, identify potential vulnerabilities and test the generated outputs against predefined standards and security requirements,” Gulhane said.
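
To make that kind of quality control concrete, here is a minimal sketch of what validating AI-generated code might look like. Everything in it is a hypothetical stand-in: the dedupe_emails source plays the role of model-generated code, and the banned-call gate and acceptance tests stand in for an organization’s predefined standards and security requirements.

```python
# Minimal sketch: gate AI-generated code statically, then test its behavior.
# GENERATED_SOURCE, BANNED_CALLS and the test cases are hypothetical examples.
import ast

GENERATED_SOURCE = """
def dedupe_emails(emails):
    seen = set()
    result = []
    for e in emails:
        e = e.strip().lower()
        if e not in seen:
            seen.add(e)
            result.append(e)
    return result
"""

BANNED_CALLS = {"eval", "exec", "compile", "__import__"}  # simple security gate

def uses_banned_calls(source: str) -> bool:
    """Reject generated code that calls known-dangerous builtins."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return True
    return False

def run_acceptance_tests(func) -> list[str]:
    """Check the generated function against predefined expectations."""
    failures = []
    cases = [
        (["A@x.com", "a@x.com ", "b@x.com"], ["a@x.com", "b@x.com"]),
        ([], []),
    ]
    for given, expected in cases:
        got = func(given)
        if got != expected:
            failures.append(f"{given!r}: expected {expected!r}, got {got!r}")
    return failures

if __name__ == "__main__":
    assert not uses_banned_calls(GENERATED_SOURCE), "security gate failed"
    namespace: dict = {}
    exec(GENERATED_SOURCE, namespace)  # only after the static gate; real
    # pipelines would also sandbox execution, omitted here for brevity
    failures = run_acceptance_tests(namespace["dedupe_emails"])
    print("PASS" if not failures else "\n".join(failures))
```

The point is the two-stage shape of the check, static inspection before the generated code is ever executed and then behavioral tests against known-good expectations, rather than any particular rule set.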

  2. Generative AI may reduce the paperwork and documentation burden cybersecurity professionals face. Although much is made of cybersecurity analysts’ hands-on-keyboard skills and their knack for finding a needle in a needlestack, less attention is paid to the sheer volume of documentation the role can require.

“This documentation can take many forms, from writing up their notes on a false positive alert to writing up a playbook for how to conduct a new type of investigation,” said IEEE Senior Member Kayne McGladrey. “While this documentation has tremendous organizational value, it’s also one of the least popular and time-consuming tasks conducted by analysts. By using AI to automate tasks such as summarizing text and generating reports according to predefined formats, analysts can significantly save time.”
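
McGladrey’s point about summarizing text into predefined formats is easy to picture in code. The sketch below assumes the openai Python package and an OPENAI_API_KEY set in the environment; the model name, report template and analyst notes are all illustrative choices, not a recommended toolchain.

```python
# Minimal sketch: turn raw SOC analyst notes into a report that follows a
# predefined template. Template, notes and model name are illustrative.
from openai import OpenAI

REPORT_TEMPLATE = """\
## Alert Triage Summary
- Alert ID:
- Verdict (true/false positive):
- Evidence reviewed:
- Recommended follow-up:
"""

RAW_NOTES = (
    "alert 4417 fired on powershell spawning from excel, checked edr, "
    "macro was signed internal finance tool, no c2 traffic, benign, "
    "suggest allowlisting the signing cert"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Rewrite SOC analyst notes into the given report "
                    "template. Do not invent facts not present in the notes."},
        {"role": "user",
         "content": f"Template:\n{REPORT_TEMPLATE}\nNotes:\n{RAW_NOTES}"},
    ],
)
print(response.choices[0].message.content)
```

The instruction not to invent facts matters here: a report that quietly adds details the analyst never wrote would undo the time savings by forcing a careful review of every line.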

  3. It could accelerate the testing of new products for security flaws, a process known as red teaming, in which internal security experts try to “hack” a product in ways developers hadn’t anticipated.

“Red teamers are occasionally tasked with evaluating the security of a software component as part of an organization’s software development lifecycle,” McGladrey said. “Unfortunately, this comes with schedule pressures; the longer the red teamers take, the longer until the software can be released to general availability, and so not all organizations follow this model.”
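
At the smallest scale, one way generative AI might compress those red-team schedules is by proposing abuse-case inputs for a component under test. In the hedged sketch below, the candidate payloads are hard-coded stand-ins for what an LLM might generate from a component’s spec, and parse_username is a deliberately weak toy, not any real product’s code.

```python
# Minimal sketch: replay AI-proposed abuse cases against a component under
# test. parse_username and the payload list are hypothetical examples.

def parse_username(raw: str) -> str:
    """Toy component under test: a naive sanitizer a red team might probe."""
    cleaned = raw.replace("<script>", "")  # weak, single-pass filtering
    if len(cleaned) > 64:
        raise ValueError("username too long")
    return cleaned

# In practice these candidates would come from an LLM prompted with the
# component's spec; a fixed list keeps the sketch self-contained.
AI_PROPOSED_ABUSE_CASES = [
    "<scr<script>ipt>alert(1)</script>",  # filter evasion via nesting
    "a" * 65,                             # boundary-length input
    "\u202Egnp.exe",                      # right-to-left override trick
]

for payload in AI_PROPOSED_ABUSE_CASES:
    try:
        result = parse_username(payload)
        flagged = "<script>" in result  # did a forbidden tag survive?
        print(f"{payload[:30]!r}: returned {result[:30]!r}, "
              f"{'VULNERABLE' if flagged else 'ok'}")
    except ValueError as exc:
        print(f"{payload[:30]!r}: rejected ({exc})")
```

Run as written, the nested-tag payload slips past the single-pass filter and is flagged, which is exactly the kind of developer blind spot a red team is paid to find.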

Learn more: Ethical hacking helps businesses and software developers see things from the perspective of people who aren’t so ethical. It’s a vital skill that improves the security of products before they are released to the public. Learn about how ethical hacking can fight cybercrime on the Innovation at Work blog.
