AI and machine learning (ML) have become crucial technologies in information security, with security tools analyzing data from millions of cyber incidents to identify potential threats.
Fremont, CA: With its capability to automate threat detection and response with minimal human intervention, AI is often pitched as a “silver bullet” against the evolving threat landscape. According to a recent report, 69 percent of enterprise executives believe AI is an essential element of responding to cyber threats. Despite its promise, however, AI within the cybersecurity landscape should be approached with a discerning eye.
Generally considered a subset of AI, machine learning algorithms build a mathematical model from sample data, identifying behavioral patterns that detect attacks and make predictions without being explicitly programmed. In the cybersecurity landscape, machine learning underpins several detection and response technologies and is employed in SIEM, EDR, XDR, and sandboxing solutions. Problems arise when machine learning is touted as AI, a broader term that encompasses systems that perceive their environment and take actions to maximize their chance of achieving their goals. It is essential to keep these distinctions in mind when assessing the capabilities and limitations of AI, and when deciding where to focus security program strategy and resources.
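To make the idea concrete, a minimal sketch of the approach described above: a model is fitted to sample data (here, a simple statistical baseline of normal activity), and new observations that deviate sharply from that learned pattern are flagged. The event type, numbers, and threshold are illustrative assumptions, not drawn from any specific product.

```python
# Minimal sketch: learn a statistical baseline from sample data, then
# flag observations that deviate sharply from it (a z-score test).
# The data and threshold below are hypothetical, chosen for illustration.
from statistics import mean, stdev

def fit_baseline(samples):
    """Build a simple 'model' (mean and standard deviation) from sample data."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) / sigma > threshold

# Baseline: typical daily failed-login counts observed on a host.
normal_logins = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]
baseline = fit_baseline(normal_logins)

print(is_anomalous(14, baseline))   # within the learned pattern -> False
print(is_anomalous(90, baseline))   # far outside the learned pattern -> True
```

Production systems use far richer models, but the principle is the same: the detection logic is derived from data rather than written out rule by rule.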
The Limitations of AI
There is a misconception that AI can solve every challenge an enterprise security program faces. Alongside the benefits AI provides, there is a risk that companies are overly optimistic about what the technology can deliver. When applied to the right use cases, AI can help security teams break their never-ending cycle of ‘detect – respond – remediate – reprogramme,’ and adopt a more dynamic, proactive, and practical approach to security. However, businesses that invest in AI expecting it to fill the resource gap left by the ongoing cybersecurity skills crisis are sorely mistaken.
Further, the level of human intervention AI tools require is significant. AI cannot yet prevent advanced threats on its own, and it is known to produce false positives. Nor can the technology learn rapidly enough to keep pace with the break-neck speed at which malware evolves. If a solution promises machine learning capabilities, it is wise to investigate whether it actually relies on rule-based programming rather than genuine machine learning algorithms.
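The distinction the paragraph above draws can be sketched in a few lines. Both detectors below are hypothetical, not any vendor's product: the first encodes fixed, hand-written logic that never changes, while the second derives what counts as "suspicious" from observed data, so its behavior shifts as the data does.

```python
# Illustrative contrast between rule-based and data-driven detection.
# Ports, events, and logic are hypothetical examples.

def rule_based_detect(event):
    """Static rule: flag connections to a hard-coded list of 'bad' ports."""
    return event["port"] in {4444, 31337}   # fixed list, never adapts

def learned_detect(event, observed_ports):
    """Data-driven: flag ports never seen in the training history."""
    return event["port"] not in observed_ports

# 'Training' history of normal traffic on a host.
history = [{"port": p} for p in (80, 443, 22, 443, 80)]
observed = {e["port"] for e in history}

new_event = {"port": 8080}
print(rule_based_detect(new_event))         # False: not on the fixed list
print(learned_detect(new_event, observed))  # True: never seen in training data
```

A product built like the first function may still be marketed as "machine learning"; asking how its detection logic is produced, and whether it changes with data, is one way to tell the two apart.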