Common Problems You May Face With Machine Learning Models

Fremont, CA: Organizations constantly struggle to put their machine learning models into production and use them to improve their operations. Data scientists build the models, but they are often unfamiliar with the production side of deploying or scoring them. When something goes wrong in production, stepping in is usually not their role, nor are DevOps duties such as model deployment. These DevOps activities and the work of data scientists have traditionally been kept separate.
With all of this going on behind the scenes, consider a few of the issues that machine learning models face in production.
Periodic Redeployment of Machine Learning Models
Machine learning models degrade over time, so they must be redeployed periodically. This runs counter to a common software engineering assumption: code deployed once keeps working indefinitely and only needs to be redeployed when the code itself is upgraded. A machine learning model, by contrast, can lose its usefulness as the data it sees shifts, so decay has to be managed across the model's lifespan through regular monitoring.
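As one illustration, a scheduled job could compare the model's recent performance against the baseline recorded at deployment time and flag it for retraining when the gap grows too large. This is only a minimal sketch: the ROC-AUC metric, the binary classifier, the 5% tolerance, and the retrain_and_deploy hook are all assumptions for illustration, not a prescribed workflow.

```python
# Minimal sketch of a decay check that could gate periodic redeployment.
# The metric, tolerance, and retrain_and_deploy() hook are illustrative
# assumptions, not a specific product's API.
from sklearn.metrics import roc_auc_score

DEGRADATION_TOLERANCE = 0.05  # acceptable relative drop vs. baseline AUC


def needs_redeployment(model, recent_X, recent_y, baseline_auc):
    """Return True when live performance has drifted below the baseline."""
    # Assumes a binary classifier exposing predict_proba().
    recent_auc = roc_auc_score(recent_y, model.predict_proba(recent_X)[:, 1])
    relative_drop = (baseline_auc - recent_auc) / baseline_auc
    return relative_drop > DEGRADATION_TOLERANCE


# Example wiring inside a scheduled monitoring job:
# if needs_redeployment(current_model, X_recent, y_recent, baseline_auc=0.91):
#     retrain_and_deploy(current_model)   # hypothetical pipeline hook
```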
All About the Monitoring
Unlike conventional application code, machine learning models may require extra monitoring effort. Because these models are trained on data and then deployed, the incoming data must stay accurate and free of unexpected anomalies. In most cases, tracking should be set up on incoming feature vectors to detect drift, bias, or anomalies in the data. With this in mind, data monitoring and alerting are critical.
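One common way to track incoming feature vectors is a per-feature statistical test against the training distribution, with an alert when the two diverge. The sketch below assumes a two-sample Kolmogorov-Smirnov test, a fixed p-value threshold, and a hypothetical alert() hook; real monitoring stacks differ in the details.

```python
# Minimal sketch of feature-drift monitoring with an alert hook.
# The KS test, the 0.01 threshold, and alert() are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01


def check_feature_drift(training_features, incoming_features, feature_names):
    """Compare each incoming feature column to its training distribution."""
    drifted = []
    for i, name in enumerate(feature_names):
        statistic, p_value = ks_2samp(training_features[:, i],
                                      incoming_features[:, i])
        if p_value < P_VALUE_THRESHOLD:
            drifted.append(name)
    return drifted


# Example alerting step in a monitoring job (alert() is hypothetical):
# drifted = check_feature_drift(X_train, X_live_batch, feature_names)
# if drifted:
#     alert(f"Feature drift detected in: {', '.join(drifted)}")
```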
Managing Similar Kinds of Models
Few tools on the market today make it easy to deploy many models simultaneously and compare how they behave on the same data; doing so remains complex, tedious, and difficult to implement. If deploying a single machine learning model manually is already laborious, the effort required to deploy and compare several models multiplies and becomes almost unmanageable. Yet observing monitoring metrics for several models on production data is a useful way to select the right one: sound business judgments can then be made about which model performs appropriately, and poor models can be dismissed quickly.
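A lightweight way to approximate this comparison is to score every candidate model on the same labelled production batch and compare a single metric side by side, in the spirit of a champion/challenger setup. The model names, the F1 criterion, and the retirement rule below are illustrative assumptions, not a recommended tool.

```python
# Minimal sketch of comparing several models on the same production data.
# Model names, the F1 metric, and the retirement rule are assumptions.
from sklearn.metrics import f1_score


def compare_models(models, X_prod, y_prod):
    """Score each named model on the same production batch."""
    return {name: f1_score(y_prod, model.predict(X_prod))
            for name, model in models.items()}


# Example: pick the best performer and flag clearly weaker models.
# scores = compare_models({"champion": model_a, "challenger": model_b},
#                         X_prod, y_prod)
# best = max(scores, key=scores.get)
# weak = [name for name, s in scores.items() if s < 0.6 * scores[best]]
```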