Machine Learning: The Future from the Perspective of Model Building
Markus Levy, Director of Enabling Technology, NXP Semiconductors
Major Challenges Affecting the Machine Learning Space
I’ve been working on machine learning technology for the last couple of years. Looking back even one or two years, many people were already interested in this technology, and rightly so: machine learning shows a lot of promise for the future. However, many people don’t know how to get started. It’s easy to look at this technology and appreciate the potential value it can deliver; the common challenge is understanding how to add machine learning capability to a traditional embedded product. Figuring out what type of machine learning capability to add starts with understanding the intended application. A significant foundation is the ability to collect the data you will use to train a model. Once you have the data, open-source and proprietary tools are available to carry out model training and deployment on the hardware at the endpoints.
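That flow, collecting data, training a model, then deploying the trained parameters to endpoint hardware, can be sketched in miniature. The example below is purely illustrative: the "sensor" data is fabricated and the model is a plain logistic regression, not anything NXP or the tools discussed here ship.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. "Collect" labeled data: 200 samples of 3 hypothetical sensor features,
#    labeled 1 when the mean reading exceeds zero.
X = rng.normal(size=(200, 3))
y = (X.mean(axis=1) > 0).astype(float)

# 2. Train a logistic-regression model with plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

# 3. "Deploy": the trained parameters are just numbers that a tiny
#    inference routine on the endpoint applies to new readings.
def predict(sample):
    return 1.0 / (1.0 + np.exp(-(sample @ w + b))) > 0.5

train_acc = (predict(X) == y).mean()
```

The point of the sketch is the shape of the pipeline, not the model: real embedded deployments swap step 2 for a framework like TensorFlow and step 3 for an optimized inference engine.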
Trends Shaping the Industry
On the proprietary tools side, a growing number of companies provide a wide range of capabilities, and some become our ecosystem partners. For example, one type of partner makes tools that help customers collect and label their data and then use it for training their models. This type of technology is challenging to build up internally, especially if you’re a small company. On the open-source side, there is a tremendous amount of activity from companies such as Google and Facebook, who provide or sponsor the TensorFlow and PyTorch training frameworks, respectively. Ancillary TensorFlow projects exist as well, such as TensorFlow Lite for mobile phones and embedded devices, and TensorFlow Lite Micro for deploying even smaller models on microcontroller-class edge devices.
Another trend is that many third-party companies are developing specialized applications for machine learning. For instance, some companies are experts at gesture recognition.
Everybody can talk about a neural network, but it is essential to understand what it really means and the value it brings to finding other ways of solving problems
A Strategy That Is Steering Your Business Growth
The main driver for us is figuring out how to make open-source technologies easier for our customers to use. The NXP eIQ Machine Learning Software Development Environment is continuously expanding to include model conversion for a wide range of NN frameworks and inference engines, such as TensorFlow Lite and Glow (the PyTorch compiler). There are also open-source technologies from Arm, such as Arm® NN, that enable higher-performance machine learning on Arm Cortex-A processors. We are even using open-source inference engines to enable machine learning accelerators in our devices. A case in point is our new device, the i.MX 8M Plus: our first applications processor featuring an integrated machine learning accelerator, delivering two to three times more performance than NXP devices without it. Integrating higher-performance machine learning capability with acceleration is one of the emerging trends in the industry.
The problem is that machine learning, or AI in general, is such a fast-growing area. The good and the bad of it is that there are far too many different technologies to keep up with and for us to support. Moving into the future, the technologies around today will either merge or give way to more de facto standards. For instance, TensorFlow is not going to go away and is used by a significant share of machine learning developers. On the other hand, PyTorch has quickly been gaining popularity, especially in the academic community. Other technologies created with a specific purpose in mind may be useful, but their industry adoption is low; these outliers may merge or disappear in the future. This is perhaps one of the main trends I see moving forward.
A few years down the road, machine learning will become a de facto standard, and you’ll see it implemented in a majority of devices, because people will realize that it’s not magic, and the good tools already available to make it work are getting better. You don’t have to be a data scientist or an expert in neural network technology to integrate machine learning into your platform. That’s one area where we also spend a lot of time at NXP: how do we make it easier for customers to deploy their machine learning models on our devices? We see both performance improvements and memory size reductions as the technology becomes more optimized, so that’s going to be a significant way forward.
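The memory size reductions mentioned here commonly come from techniques such as post-training quantization. Below is a framework-free sketch of the core idea, mapping float32 weights to int8 with a single scale factor; real toolchains such as TensorFlow Lite's converter do this per-tensor or per-channel with calibration data, but the arithmetic is the same in spirit.

```python
import numpy as np

# Fabricated stand-in for a layer's trained float32 weights.
rng = np.random.default_rng(1)
weights = rng.normal(scale=0.1, size=1024).astype(np.float32)

# Symmetric quantization: one scale maps the float range onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to see the (small) error the approximation introduces.
dequantized = q.astype(np.float32) * scale

shrink = weights.nbytes / q.nbytes             # int8 is 4x smaller than float32
max_err = np.abs(weights - dequantized).max()  # bounded by scale / 2
```

The trade is explicit: a fourfold smaller weight footprint in exchange for a rounding error no larger than half the quantization step, which is why quantized models usually need only a small accuracy-validation pass.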
Piece of Advice
As previously mentioned, we have developed a technology called eIQ™ for edge intelligence. I encourage people to check it out, walk through some of the application examples, and experience machine learning in action. If, like most of us, you’re trying to learn more about this technology, there are many good YouTube videos and an abundance of articles; you just have to spend the time filtering through them. You can learn a lot from what people have posted online: everything from what a neural network is, to how to train one, to how to make it more performance efficient and more accurate. There’s plenty of information available for people who are starting out. One exciting thing about machine learning, which applies to other technologies as well, is that the more you learn about it, the more you realize you don’t know. Everybody can talk about a neural network, but understanding what it really means and its value in solving problems is essential to unlocking machine learning’s extraordinary potential.
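For readers starting from "what is a neural network," the core mechanics fit in a few lines: layers of weights, a matrix multiply, and a nonlinearity. The weights below are random placeholders standing in for values a training process would produce; this is a sketch of the idea, not a usable model.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 2 classes.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()                    # class probabilities

probs = forward(rng.normal(size=4))
```

Everything beyond this, training, accuracy, efficiency, is refinement of these same pieces, which is why the basics are worth the time to absorb before reaching for the big frameworks.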