Efficient hardware implementation of deep neural network processing Transcript

Posted:
12 Nov 2018
Authors:
Marian Verhelst
Page/Slide Count:
Pages: 4
Abstract
Deep learning comes with significant computational complexity, which until recently made it feasible only on power-hungry server platforms. In recent years, however, a trend towards embedded processing of deep learning networks has emerged, with several deep learning accelerators appearing both academically and commercially. This talk gives an overview of the various techniques used in such designs to improve deep neural network inference efficiency and throughput. The talk then dives into the need for co-optimization between algorithms and implementation architectures to obtain solutions that are efficient not only from a hardware point of view, but also from an application or system-level viewpoint.

Speaker Biography
Marian Verhelst has been an associate professor at the MICAS laboratories (MICro-electronics And Sensors) of the EE Department of KU Leuven, Belgium, since 2012. Her research focuses on low-power sensing and processing for the Internet of Things, embedded machine learning, and self-adaptive systems. From 2008 to 2011, she worked in the Radio Integration Research Lab of Intel Labs, Hillsboro, OR, researching digital assistance of configurable wireless radio front-ends. Marian received her PhD from KU Leuven cum ultima laude in 2008 and was a visiting scholar at the Berkeley Wireless Research Center (BWRC) of UC Berkeley in the summer of 2005.

Marian has a passion for interdisciplinary collaborations and science communication, is a member of the Young Academy of Belgium, and has published over 60 papers in conferences and journals. She is a member of the ISSCC and DATE TPCs, as well as a member of the executive committees of DATE and ISSCC. Marian is an SSCS Distinguished Lecturer and an associate editor of JSSC.