Traditional AI training requires the cloud or large on-premise data centers and can take days or weeks to complete. Real data, on the other hand, is mostly generated at the edge. Running AI inference and training on the same edge device reduces not only the total cost of ownership (TCO) but also latency and the risk of security breaches.
Fremont, CA: The field of artificial intelligence is evolving rapidly, and the rate of innovation is only increasing. While the software industry has been successful in deploying AI in production, the hardware industry, which includes automotive, industrial, and smart retail, is still in the early stages of AI productization. Significant gaps still prevent AI algorithm proofs of concept (PoCs) from becoming real hardware deployments. These gaps stem primarily from small-data issues, “non-perfect” inputs, and ever-changing “state-of-the-art” models. How can software developers and AI researchers overcome these obstacles? The answer is adaptable hardware.
Every day, internet behemoths like Google and Facebook collect and analyze vast quantities of data. They then use this data to build AI models that perform well right away. In such cases, the hardware used to train the models differs significantly from that used to run the models.
On the other hand, big data availability in the hardware industry is much more limited, leading to less mature AI models. As a result, there is a strong push to collect more data and run “online models,” which perform training and inference on the same deployed hardware to continuously improve accuracy.
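The “online model” idea described above can be sketched in a few lines of code: the same lightweight model serves each prediction and is immediately updated with the newly captured sample, so accuracy improves as field data accumulates. This is a minimal illustration, assuming a simple logistic regression trained with per-sample gradient descent; the class and variable names are hypothetical, not any vendor's API.

```python
import numpy as np

class OnlineModel:
    """Toy online learner: inference and training on the same device."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Inference step: logistic probability of the positive class.
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x, y):
        # Training step, one sample at a time (stochastic gradient descent).
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Simulated stream of edge data: the label is 1 when the first
# feature exceeds the second, 0 otherwise.
rng = np.random.default_rng(0)
model = OnlineModel(n_features=2)
for _ in range(2000):
    x = rng.normal(size=2)
    y = 1.0 if x[0] > x[1] else 0.0
    model.update(x, y)

# After streaming updates, the model separates the two classes.
print(model.predict(np.array([2.0, -2.0])) > 0.5)
print(model.predict(np.array([-2.0, 2.0])) < 0.5)
```

Each deployed device accumulates updates from its own sensor stream, which is the continuous-improvement loop the paragraph describes, without shipping raw data back to a data center.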
To address this, adaptive computing devices such as FPGAs and adaptable system-on-chip (SoC) devices deployed at the edge can run both inference and training, continuously updating themselves with newly captured data.
While it is becoming easier to publish an AI model proof of concept (PoC) that demonstrates, for example, improved COVID-19 detection accuracy using X-ray images, these PoCs are almost always based on well-cleaned-up input images. In real life, camera and sensor inputs from medical devices, robots, and moving vehicles may suffer random distortion, such as dark images and objects viewed at different angles. Before they can be fed into AI models, these inputs must first be cleaned up and reformatted using sophisticated preprocessing. Postprocessing is equally critical for making sense of AI model outputs and driving appropriate decision-making.
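The pre- and postprocessing steps above can be sketched as follows. This is an illustrative example only, assuming the model expects fixed-size, normalized input; the function names and the specific cleanup steps (contrast stretching for dark frames, center-cropping, softmax over raw outputs) are hypothetical choices, not a particular device's pipeline.

```python
import numpy as np

def preprocess(frame, target=224):
    """Clean up a raw sensor frame before it reaches the AI model."""
    frame = frame.astype(np.float32)
    # Contrast-stretch a dark image to the full [0, 1) range.
    lo, hi = frame.min(), frame.max()
    frame = (frame - lo) / (hi - lo + 1e-6)
    # Center-crop to the fixed size the model expects.
    h, w = frame.shape
    top, left = (h - target) // 2, (w - target) // 2
    return frame[top:top + target, left:left + target]

def postprocess(logits, labels):
    """Turn raw model outputs into a decision: softmax, then argmax."""
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return labels[int(np.argmax(probs))], float(probs.max())

# Example: a dark 256x256 frame and some dummy model outputs.
dark = np.random.default_rng(1).uniform(0, 30, size=(256, 256))
x = preprocess(dark)
label, conf = postprocess(np.array([0.2, 2.5, -1.0]),
                          ["normal", "anomaly", "unknown"])
print(x.shape, label)
```

On adaptable SoCs, stages like these are commonly offloaded to programmable logic alongside the model itself, which is part of the appeal the article describes.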