Top Machine Learning Challenges for Threat Detection

While machine learning may significantly improve an organisation's security posture, it is important to be aware of the issues it faces when organisations build it into their security policies.
Fremont, CA: Machine learning's continuing advancement and its capacity to deliver deep insights from massive data sets remain hot topics. Many C-level executives are launching purposeful machine learning projects to examine how they might improve their organisations, and cybersecurity is no different. Most information security providers have incorporated some form of machine learning, but it is evident that it is not the panacea some have claimed.
While machine learning (ML) solutions for cybersecurity can and will deliver a considerable return on investment, they currently face several obstacles. To reach ML's full potential, organisations should be aware of a few possible hurdles and set reasonable targets.
• Dynamic environments
Before monitoring for unusual occurrences that might signal a breach, ML algorithms learn the environment and establish baseline norms. If, however, the IT organisation is continually reinventing itself to meet business agility demands, the dynamic environment lacks a consistent baseline, the algorithm cannot discern what is typical, and it raises warnings on entirely harmless occurrences.
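To make the baseline problem concrete, here is a minimal sketch in Python of a simple statistical baseline detector; the traffic figures and threshold are hypothetical and real products use far richer models, but the failure mode is the same: once the environment legitimately changes, benign activity breaches the learned baseline and generates false alerts.

    # Minimal sketch: a z-score baseline over daily outbound traffic (hypothetical numbers).
    # Once the environment legitimately changes (e.g., a new service is deployed),
    # perfectly benign volumes start exceeding the threshold and raise false alerts.
    import statistics

    baseline_gb = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3]   # traffic learned as "normal"
    mean = statistics.mean(baseline_gb)
    stdev = statistics.stdev(baseline_gb)

    def is_anomalous(observed_gb, threshold=3.0):
        """Flag any observation more than `threshold` standard deviations from the baseline mean."""
        return abs(observed_gb - mean) / stdev > threshold

    print(is_anomalous(12.5))   # False: within the learned baseline
    print(is_anomalous(18.0))   # True: could be exfiltration -- or just a newly deployed service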
To reduce the effect, security teams must collaborate with DevOps teams to understand what changes are being made and update their tools accordingly. Because these functions should be coordinated and operate with a shared awareness, the term DevSecOps (development, security, and operations) is gaining favour.
• Context
The strength of ML stems from its capacity to make predictions based on large-scale, multi-variable correlation. When an actual alert arrives in a security analyst's queue, however, this powerful correlation appears as a black box, leaving nothing but a ticket that reads "Alert." From there, an analyst must comb through records and events to determine what triggered the action.
The easiest way to address this issue is to equip the security operations centre with technologies that can swiftly filter log data on the triggering entity. Artificial intelligence can help automate and speed up data contextualisation here. Data visualisation tools may also be useful, since they provide a quick chronology of events as well as an understanding of a specific environment. With that context, a security analyst can quickly assess why the machine learning programme raised the alert and whether it is accurate.
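As an illustration of entity-based contextualisation, the following minimal Python sketch, with hypothetical log records and field names, gathers every record involving the entity that triggered an alert and sorts it into a timeline an analyst can scan.

    # Minimal sketch: pull every log record involving the entity that triggered an alert
    # and order it chronologically, giving an analyst a quick timeline to judge whether
    # the ML alert is a true positive. Field names and records are hypothetical.
    from datetime import datetime

    logs = [
        {"timestamp": "2021-06-01T09:14:02", "entity": "host-42", "event": "login from new geolocation"},
        {"timestamp": "2021-06-01T09:15:11", "entity": "host-17", "event": "scheduled backup started"},
        {"timestamp": "2021-06-01T09:16:45", "entity": "host-42", "event": "privilege escalation attempt"},
        {"timestamp": "2021-06-01T09:17:03", "entity": "host-42", "event": "large outbound transfer"},
    ]

    def timeline_for(entity):
        """Return the entity's events sorted by time, oldest first."""
        related = [r for r in logs if r["entity"] == entity]
        return sorted(related, key=lambda r: datetime.fromisoformat(r["timestamp"]))

    for record in timeline_for("host-42"):
        print(record["timestamp"], "-", record["event"])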
• Anti-ML attacks
The ultimate hurdle for ML is attackers who can quickly adapt and circumvent detection. When this happens, the consequences can be disastrous, as demonstrated by researchers who recently tricked a Tesla into accelerating to 85 MPH by subtly altering a 35 MPH road sign.
The use of machine learning in security is no exception. A good example is an ML network-detection method that uses byte-frequency analysis to assess whether traffic is innocuous or shellcode. Attackers quickly responded with polymorphic blending attacks, padding shellcode with extra bytes to change the byte frequency and evade detection entirely. It is yet another example of how no single tool is impenetrable, and of why security teams must regularly analyse their security posture and stay informed about current attack patterns.
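To illustrate why byte-frequency detectors are vulnerable, here is a minimal Python sketch with hypothetical payloads: it compares a payload's byte distribution against a "normal traffic" profile and shows how padding the payload with benign-looking bytes pulls its distribution back toward normal, which is the core idea behind polymorphic blending.

    # Minimal sketch: byte-frequency analysis of a payload versus a "normal traffic" profile,
    # and how padding the payload with benign-looking bytes pulls its distribution back toward
    # normal, illustrating the idea behind polymorphic blending. All values are hypothetical.
    from collections import Counter

    def byte_frequencies(data: bytes):
        """Relative frequency of each byte value in the payload."""
        counts = Counter(data)
        total = len(data)
        return {b: c / total for b, c in counts.items()}

    def distance_from_profile(data: bytes, profile: dict) -> float:
        """Sum of absolute differences between payload and baseline byte frequencies."""
        freqs = byte_frequencies(data)
        keys = set(freqs) | set(profile)
        return sum(abs(freqs.get(k, 0.0) - profile.get(k, 0.0)) for k in keys)

    normal_profile = byte_frequencies(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 50)
    shellcode = bytes([0x90] * 40) + b"\xcc\xeb\xfe"           # stand-in for a suspicious payload
    padded = shellcode + b"GET /index.html HTTP/1.1\r\n" * 20  # blended with benign-looking bytes

    print(distance_from_profile(shellcode, normal_profile))    # large distance: easily flagged
    print(distance_from_profile(padded, normal_profile))       # smaller distance: closer to "normal"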
Machine learning can be quite useful for enabling and advancing security teams. The ability to automate detection and data correlation can save security professionals a great deal of time.
However, the key to a better security posture is human-machine teaming, in which a symbiotic relationship exists between machine (an expanding library of indicators of compromise) and human (penetration testers and a cadre of white-hat hackers). Humans contribute attributes that ML cannot (yet) reproduce, such as logic, emotional reasoning, and decision-making grounded in experiential knowledge, while ML brings the speed and agility needed to stay ahead of the curve.