May 15, 2018

Just before the new year, IEEE brought together 19 artificial intelligence (AI), machine learning (ML) and cybersecurity experts for a summit to help chart the course of these three disciplines. Their challenge was to answer this question: Given the rapid evolution of AI/ML technologies and the enormous challenges we all face with respect to cybersecurity, what is needed from AI/ML, where can it be best applied, and what must be done over the next 10 years?

Each installment of this three-part series will look at a key part of the trend paper that came out of the meeting. The first: building trust both in cybersecurity technologies and in humans (yes, humans).

As more of our crucial systems have become reliant on the internet, the number of potential cybersecurity attack surfaces has grown accordingly. While artificial intelligence is still a relatively new tool in many respects, its capabilities are making it increasingly useful in defending devices and networks (detecting malicious events is just one example).

And yet, because AI itself is connected, it is also vulnerable.

Protecting AI requires a fundamental admission: “Building needs to be accomplished with attackers in mind. As with all code, the question related to an AI/ML security compromise is ‘when’ and not ‘if.’”

Hence the need for rigorous training and measurement systems. These systems can help avoid compromises in AI/ML software, and when a breach inevitably takes place, they can greatly increase the odds that AI and ML will “fail well.” Both are important factors in maintaining our long-term confidence in the technology.
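To make the idea of “failing well” concrete, here is a minimal sketch of graceful degradation in a detection pipeline: if the model errors out or its confidence is low, the system falls back to a conservative action rather than acting autonomously. The function names, threshold and event fields are hypothetical, not from the trend paper.

```python
# A minimal sketch of "failing well": if the classifier is unavailable or
# its confidence is low, fall back to a conservative action instead of
# silently trusting a possibly compromised model. All names here
# (score_event, quarantine_and_escalate, etc.) are illustrative.

CONFIDENCE_FLOOR = 0.90  # below this, the system does not act on its own

def score_event(event: dict) -> float:
    """Stand-in for a trained model; returns the probability the event is malicious."""
    return 0.5  # placeholder score

def quarantine_and_escalate(event: dict) -> str:
    """Conservative fallback: contain the asset and route it to a human analyst."""
    return f"quarantined {event['host']}; ticket opened for review"

def handle_event(event: dict) -> str:
    try:
        score = score_event(event)
    except Exception:
        # The model itself failed: degrade safely rather than doing nothing.
        return quarantine_and_escalate(event)

    if score >= CONFIDENCE_FLOOR:
        return f"auto-blocked traffic from {event['host']} (score={score:.2f})"
    # Low confidence: fail well by deferring instead of guessing.
    return quarantine_and_escalate(event)

if __name__ == "__main__":
    print(handle_event({"host": "10.0.0.42", "bytes_out": 1_200_000}))
```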

It’s tempting to imagine a situation in which AI simply takes over the management of cybersecurity from humans. However, the experts at the session were adamant that this is not something to strive for. “Computers and humans can jointly defend against attackers better than either can on their own,” was one conclusion.

One of humanity’s roles here is to effectively train the AI used for security. “All training data are not equal,” said the panel. “Builders of AI/ML systems […] should articulate why they are confident that the sample used for training data is, in fact, accurately representative of the entire population of real-world deployment situations that the AI/ML system is likely to encounter.”
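One way to back up that claim of representativeness is to compare how a feature is distributed in the training sample against what the deployed system actually sees. The sketch below uses a two-sample Kolmogorov-Smirnov test for that comparison; the feature and the synthetic data are illustrative assumptions, not part of the trend paper.

```python
# A minimal sketch of checking whether training data still represents
# deployment data, using one illustrative feature (bytes per connection).

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: bytes transferred per connection.
training_sample = rng.lognormal(mean=8.0, sigma=1.0, size=5_000)
deployment_sample = rng.lognormal(mean=8.6, sigma=1.2, size=5_000)  # traffic has drifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the training
# sample no longer reflects what the deployed system encounters.
result = ks_2samp(training_sample, deployment_sample)
print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.3g}")

if result.pvalue < 0.01:
    print("Distributions differ: re-sample or retrain before trusting the model.")
```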

And it’s important to remember that not all systems will be secured using AI – “the more sensitive the deployment context, the more important it becomes to retain human oversight as a part of the decision loop. Some contexts may even prove too fragile for the use of AI/ML.” By regularly performing audits and engaging with standards, we can help ensure trust in the systems, as well as in the people responsible for them.
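To illustrate what keeping a human in the decision loop might look like in practice, here is a minimal sketch in which the AI only recommends a response, and actions touching sensitive contexts require an analyst’s sign-off, with the analyst free to veto. Every name here (recommend_response, analyst_approves, the context labels) is a hypothetical stand-in, not an API from the paper.

```python
# A minimal sketch of human oversight in the decision loop: the model
# recommends, but sensitive contexts require explicit human approval,
# and the human can veto the recommendation.

from dataclasses import dataclass

SENSITIVE_CONTEXTS = {"medical-device", "industrial-control"}

@dataclass
class Recommendation:
    action: str       # e.g. "isolate-host"
    context: str      # deployment context the action would affect
    rationale: str

def recommend_response(alert: dict) -> Recommendation:
    """Stand-in for the AI/ML component."""
    return Recommendation("isolate-host", alert["context"], "beaconing to known C2 domain")

def analyst_approves(rec: Recommendation) -> bool:
    """Stand-in for a human review step (ticket queue, console prompt, etc.)."""
    return False  # in this example, the analyst exercises a veto

def respond(alert: dict) -> str:
    rec = recommend_response(alert)
    if rec.context in SENSITIVE_CONTEXTS and not analyst_approves(rec):
        # Sensitive contexts never run on autopilot; the human veto wins.
        return f"vetoed: {rec.action} ({rec.rationale})"
    return f"executed: {rec.action}"

if __name__ == "__main__":
    print(respond({"context": "industrial-control", "host": "plc-07"}))
```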

In some cases, having AI and ML run in a closed loop will be just fine. But by and large, “While understanding and trust may grow on a societal level to eventually allow AI/ML to make response decisions, humans must always have a way to veto those decisions.” Together, we can make the future of computing more secure and trustworthy.
