February 26, 2018

The IEEE International Solid-State Circuits Conference (ISSCC) is the premier global forum for solid-state circuit research in both academia and industry. This year’s conference, held in San Francisco, was the 65th annual edition and drew more than 3,000 attendees.

Among the show’s highlights were new circuit design techniques and system-on-chip innovations that stand to accelerate on-chip machine learning. And machine learning did indeed take center stage. In his opening keynote, David Patterson from Google and UC Berkeley explained that although we have reached the end of Dennard scaling and Moore’s Law, we can continue to improve computational and energy efficiency by adopting domain-specific architectures.
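As background for the keynote’s argument (a standard textbook scaling derivation, not taken from the talk itself): Dennard scaling held that shrinking transistor dimensions let voltage and capacitance shrink in proportion, so power density stayed constant even as clock frequency rose. Once supply voltage stopped scaling, that free ride ended.

```latex
% Switching power of CMOS logic:
%   P = C V^2 f
% Classic Dennard scaling by a linear factor k gives
%   C -> C/k,   V -> V/k,   f -> k f
% so per-device power falls by k^2, matching the k^2 drop in device
% area: power density stays constant as the chip gets denser.
P' = \frac{C}{k}\left(\frac{V}{k}\right)^{2}(k f)
   = \frac{C V^{2} f}{k^{2}} = \frac{P}{k^{2}}
```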

Image Source: 2018 IEEE International Solid-State Circuits Conference, Session 1.4, "50 Years of Computer Architecture: From Mainframe CPUs to DNN TPUs and Open RISC-V"

While these architectures perform only a subset of the instructions from a traditional instruction-set architecture, they do so with much higher efficiency.

Google’s TPU, which performs deep neural network inference and supports a growing diversity of artificial intelligence algorithms, achieves up to a 30x computational speedup and up to 80x higher energy efficiency than modern CPUs and GPUs.
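To make the domain-specific idea concrete: the TPU’s workhorse is a large 8-bit integer matrix-multiply unit, so inference frameworks quantize floating-point weights and activations to int8 before dispatching work to it. Below is a minimal NumPy sketch of that quantize-multiply-dequantize pattern; it illustrates the general technique only, with arbitrary sizes and per-tensor scales, and is not Google’s implementation.

```python
import numpy as np

def quantize(x, scale):
    """Map float values onto int8 using a per-tensor scale factor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)

# Float weights and activations, as a framework would hold them.
W = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# Per-tensor scales chosen from the observed value range.
w_scale = np.abs(W).max() / 127
x_scale = np.abs(x).max() / 127

# The accelerator sees only int8 operands; products accumulate in
# int32, mirroring the wide accumulators of a typical MAC array.
W_q = quantize(W, w_scale)
x_q = quantize(x, x_scale)
acc = W_q.astype(np.int32) @ x_q.astype(np.int32)

# Rescale the int32 accumulator back to the float domain.
y = acc.astype(np.float32) * (w_scale * x_scale)

# The quantized result closely tracks the float reference.
print(np.allclose(y, W @ x, atol=0.1 * np.abs(W @ x).max()))
```

The point of the exercise: every multiply-accumulate runs on 8-bit operands, which costs far less silicon area and energy than 32-bit floating point, while the accuracy loss for inference is typically small.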

Image Source: Google Cloud Platform

From the academic side, researchers from several universities demonstrated in-memory computing techniques and advances in low-power neural network accelerators for image classification and hand-gesture recognition on smart mobile devices. Such implementations leverage the low cost and low power of highly integrated CMOS processors, and they will enable contextual user experiences in the coming IoT era.
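In-memory computing attacks the energy cost of moving weights between memory and a separate arithmetic unit by computing inside the memory array itself: weights are stored as cell conductances, inputs are applied as word-line voltages, and each bit line sums the resulting currents, so a single array access yields an entire matrix-vector product. The NumPy sketch below is a toy ideal-device model of that analog dot-product idea, with made-up conductance and voltage ranges; it is not a model of any specific design presented at the conference.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small weight matrix for one neural-network layer.
W = rng.standard_normal((64, 32))

# Physical conductances cannot be negative, so each weight maps onto
# a differential pair of cells (G+ minus G-). g_max is an assumed
# device limit in siemens.
g_max = 1e-6
scale = g_max / np.abs(W).max()
G_pos = np.where(W > 0, W * scale, 0.0)
G_neg = np.where(W < 0, -W * scale, 0.0)

# Input activations become word-line voltages (assumed 0 to 0.2 V).
x = rng.random(64)
v = x * 0.2

# Ohm's law plus Kirchhoff's current law: each bit line sums the
# currents of its cells, so one read computes the whole product.
i_out = G_pos.T @ v - G_neg.T @ v

# Rescale bit-line currents back to the numeric domain and compare
# against a conventional digital computation.
y = i_out / (scale * 0.2)
print(np.allclose(y, W.T @ x))
```

Real arrays add ADCs, device nonidealities, and limited bit precision on top of this ideal picture, which is where the circuit-design work presented in the session comes in.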

Image Source: 2018 IEEE International Solid-State Circuits Conference, Session 13.4, "A 9.02mW CNN-Stereo-Based Real-Time 3D Hand-Gesture Recognition Processor for Smart Mobile Devices"

