The world is facing a shortage of nearly 3 million cybersecurity professionals. Breaches that expose millions of customer records have become practically a weekly event. Research firm Gartner expects global cybersecurity spending to increase at nearly four times the rate (12.4%) of overall IT spending this year (3.2%). Can artificial intelligence (AI) be the solution to this intractable situation?
Not yet, said Nicolas Kseib, lead data scientist at TruSTAR, maker of a management platform that helps enterprises operationalize security data. While AI is proving to be a strong ally to human security researchers, the technology is still too immature to rely on for the complex decision-making that would alleviate the pressing need for human security professionals.
“Be aware that what is called AI today does not even match the cognitive capabilities of a two-year-old,” Kseib said.
What appears to be intelligent behavior or human-like thinking in AI software or robots is actually “a bunch of models trained on large datasets to perform specific brute-force tasks,” he added.
AI’s Derivatives and Potential
Not that there’s anything wrong with that. AI and its two most common branches, machine learning (ML) and deep learning (DL), are already providing assistance to beleaguered security pros. ML and DL algorithms can process more information and spot more patterns than human analysts can.
While AI is the broad effort to make machines behave intelligently, ML is a subset of AI in which algorithms learn and adjust themselves as they are exposed to more data, without explicit human programming. DL is a subset of ML whose layered neural networks are loosely modeled on the human brain; it can process large volumes of unstructured or unlabeled data, unsupervised, through multiple layers of “thought.”
[Related: Nutanix Enterprise Cloud for AI]
By way of example, a strong use case for ML is discovering that certain combinations of prescription drugs are more likely to cause negative interactions in patients with diabetes. DL excels at tasks like voice and face recognition.
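To make “training” concrete, the sketch below uses Python and the scikit-learn library (a choice made here for illustration; neither is mentioned by the sources) with an invented, toy version of the drug-interaction scenario. The point is the pattern: the model fits itself to labeled examples rather than following rules a human wrote by hand.

```python
# Illustrative only: a toy supervised-learning example in the spirit of the
# drug-interaction use case above. The data and feature choices are invented;
# a real model would be trained on large clinical datasets.
from sklearn.ensemble import RandomForestClassifier

# Features per patient: [dose_drug_a_mg, dose_drug_b_mg, has_diabetes (0/1)]
X_train = [
    [10, 0, 1],
    [10, 20, 0],
    [50, 40, 1],
    [60, 50, 1],
    [5, 5, 0],
    [55, 45, 0],
]
# Label: 1 = adverse interaction observed, 0 = none
y_train = [0, 0, 1, 1, 0, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the model adjusts itself to the data; no rules are hand-written

# Score a new patient on high doses of both drugs who has diabetes.
print(model.predict_proba([[50, 45, 1]])[0][1])  # estimated probability of an interaction
```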
Making Sense of Huge Volumes of Security Data
In the security realm, ML and DL are proving useful for cutting down on the volume of false alerts – events that look like breaches but are in fact benign – that plague intrusion detection systems.
“It’s especially promising in areas where you have a lot of complex data to sort through,” said Sven Krasser, chief scientist at CrowdStrike, a maker of endpoint protection software. “We can see events more clearly and statistically dissect files to decide if they’re good or bad.”
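A simplified sketch of that triage idea, again in Python with scikit-learn and with invented features, data, and threshold, shows how a classifier trained on analyst-confirmed alerts can score new ones so that likely false positives never reach a human queue. It illustrates the approach Krasser describes, not CrowdStrike’s actual system.

```python
# Illustrative sketch: rank intrusion-detection alerts by estimated risk so
# analysts can deprioritize likely false positives. Features, data, and the
# threshold are invented for the example.
from sklearn.linear_model import LogisticRegression

# Features per alert: [failed_logins, distinct_ports_scanned, off_hours (0/1)]
X_past_alerts = [
    [2, 1, 0],    # turned out benign
    [1, 2, 1],    # benign
    [40, 25, 1],  # confirmed intrusion
    [35, 30, 0],  # confirmed intrusion
    [3, 1, 1],    # benign
]
y_past = [0, 0, 1, 1, 0]  # analyst-confirmed labels: 1 = real incident

triage_model = LogisticRegression().fit(X_past_alerts, y_past)

# Score today's alerts and surface only the riskiest ones.
new_alerts = [[2, 2, 0], [38, 28, 1]]
risk_scores = triage_model.predict_proba(new_alerts)[:, 1]
for alert, score in zip(new_alerts, risk_scores):
    if score > 0.5:  # arbitrary cutoff for this sketch
        print("escalate to analyst:", alert, round(score, 2))
```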
Makers of anti-malware software see potential in ML for moving beyond signature detection to flag rogue programs based on their behavior.
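The difference between the two approaches can be sketched in a few lines of Python. The blocklist hash, behavioral features, and training data below are all hypothetical: a signature check only catches files it has seen before, while a behavior-based model can flag a never-before-seen program whose runtime actions resemble known malware.

```python
# Illustrative contrast between signature matching and behavior-based ML
# detection. Hashes, features, and training data are made up for the sketch.
import hashlib
from sklearn.tree import DecisionTreeClassifier

# Placeholder blocklist; a real one would hold hashes of catalogued malware.
KNOWN_BAD_HASHES = {"<sha256 of a previously catalogued malware sample>"}

def signature_detect(file_bytes: bytes) -> bool:
    """Flags only files whose exact hash is already on the blocklist."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# Behavioral features observed at runtime:
# [registry_writes, outbound_connections, files_encrypted]
X_behavior = [
    [1, 2, 0],      # benign programs
    [0, 1, 0],
    [30, 15, 200],  # ransomware-like behavior
    [25, 20, 150],
]
y_behavior = [0, 0, 1, 1]

behavior_model = DecisionTreeClassifier(random_state=0).fit(X_behavior, y_behavior)

# A brand-new binary: no known signature, but suspicious runtime behavior.
new_file = b"never-seen-before binary contents"
new_behavior = [[28, 18, 175]]

print("signature match:", signature_detect(new_file))               # False
print("behavioral verdict:", behavior_model.predict(new_behavior))  # [1] -> flagged
```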