AI Accelerators are Revolutionizing Edge Computing!
The integration of software platforms with robust hardware accelerators will enable seamless identification, detection, and classification of data sets at the Edge.
FREMONT, CA – Artificial intelligence (AI) has become a critical driver in the edge computing sector. Edge computing deals with the local computing, storage, and processing of sensitive data for internet of things (IoT) devices. By avoiding cloud round-trips, it reduces latency and enables quick response times. The data generated by IoT sensors is aggregated and filtered at the Edge before being sent to the cloud, allowing enterprises to cut down on bandwidth as well as cloud storage costs.
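The aggregate-and-filter step described above can be sketched in a few lines. This is a hypothetical illustration, not a specific product's pipeline: the function name, the anomaly threshold, and the payload fields are all assumptions for the example.

```python
import statistics

# Assumed domain-specific cutoff above which a reading counts as anomalous.
ANOMALY_THRESHOLD = 80.0

def summarize_window(readings):
    """Aggregate a window of raw sensor readings into a compact payload.

    Only this summary (plus any anomalous readings) is forwarded to the
    cloud, instead of every raw sample - cutting bandwidth and storage.
    """
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "anomalies": [r for r in readings if r > ANOMALY_THRESHOLD],
    }

# Five raw temperature samples collected at the edge device.
window = [21.5, 22.0, 21.8, 95.2, 22.1]
payload = summarize_window(window)
```

Here the device transmits one small summary and a single flagged anomaly rather than all five raw samples; at IoT scale, that difference dominates bandwidth costs.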
Edge computing has equipped enterprises with robust capabilities, including local data processing, filtered data transfer, and enhanced decision-making. Enterprises are integrating AI capabilities into edge computing to deploy machine learning (ML) models trained in the cloud. Advanced graphics processing units (GPUs) have empowered enterprises to streamline ML training, and modern deep learning frameworks such as TensorFlow, PyTorch, Apache MXNet, and Microsoft CNTK leverage GPU capabilities to accelerate it.
However, the capabilities of edge devices are limited when compared to data centers and public cloud infrastructure. Hence, enterprises hesitate to deploy deep learning models at the Edge, which can hinder the classification and prediction (inference) stages that follow training.
Chip manufacturers are developing advanced processors to augment model inferencing. These niche accelerators are designed to enhance the capabilities of edge systems by streamlining the sophisticated mathematical calculations used in deep learning models. Although not as robust as GPUs, the accelerator chips enable faster prediction, identification, and classification of data at the Edge.
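One common way such accelerators streamline this math is by running models in low-precision integer arithmetic instead of 32-bit floats. A minimal sketch of symmetric int8 weight quantization illustrates the idea; the function names and the example weights are hypothetical, and production toolchains handle this automatically.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127].

    A single scale factor relates the integer values back to the original
    floating-point range, so the accelerator can do cheap integer math.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The quantized model trades a small, bounded rounding error (at most half a quantization step per weight) for much cheaper arithmetic, which is why accelerator chips can run inference quickly despite being less powerful than GPUs.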
To accelerate machine learning workloads, tech giant Google launched tensor processing units (TPUs) on its cloud platform. Cloud TPUs are designed to provide high computational power for training sophisticated neural-network-based machine learning models. The v2 Cloud TPUs offer a performance of 180 teraflops with 64 GB of high-bandwidth memory (HBM), enabling Google Cloud Platform users to attach Cloud TPUs to custom VM types and balance processor speed, memory, and storage resources for intensive workloads.
Recently, Google also announced the Edge TPU, which is designed to complement the Cloud TPU. Converting TensorFlow models to TensorFlow Lite models ensures compatibility with the Edge TPU. With the rising implementation of artificial intelligence at the Edge, the integration of software platforms with robust hardware accelerators will enable seamless inferencing for ML models.
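The TensorFlow-to-TensorFlow Lite conversion mentioned above is done with TensorFlow's converter API. The sketch below assumes TensorFlow is installed and that a trained model exists at the placeholder path "saved_model_dir"; note that deploying to the Edge TPU additionally requires full-integer quantization and a pass through Google's Edge TPU compiler, which are omitted here.

```python
import tensorflow as tf

# Load a trained SavedModel from a placeholder directory (assumption:
# a model has already been trained and exported there).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Request default optimizations (e.g., quantization) to shrink the model
# for edge deployment.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Produce the TensorFlow Lite flatbuffer and write it to disk.
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is the intermediate artifact that edge runtimes consume; this is a conversion recipe dependent on an external model, so it is shown as a sketch rather than a runnable test.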