RISE Seminar 9/13/19: Scalable, Efficient, and Productive: Holistic Hardware Optimizations for Machine Learning Acceleration, a talk by Sophia Shao

September 13, 2019

Title: Scalable, Efficient, and Productive: Holistic Hardware Optimizations for Machine Learning Acceleration
Speaker: Sophia Shao (UC Berkeley)
Date and location: Friday, September 13, 11 am – 12 pm, Wozniak Lounge

Abstract: Machine learning systems are being widely deployed across billions of edge devices and datacenters around the world. At the same time, with the end of Moore’s Law and Dennard scaling, we rely on building vertically integrated systems with domain-specific accelerators to improve system performance and efficiency. In this talk, I will describe our recent work on building scalable and efficient hardware that delivers real-time and robust performance across diverse deployment scenarios through joint hardware-software optimizations. I will conclude by describing ongoing efforts toward building next-generation computing platforms for real-time machine learning.

Bio: Sophia Shao is an Assistant Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Previously, she was a Senior Research Scientist at NVIDIA. She received her Ph.D. in 2016 and S.M. in 2014 from Harvard University, and a B.S. in Electrical Engineering from Zhejiang University, China. Her research interests are in the area of computer architecture, with a special focus on domain-specific architecture, deep-learning accelerators, and high-productivity hardware design methodology. Her work has been selected as one of the IEEE Micro Top Picks in Computer Architecture, and her Ph.D. dissertation was nominated by Harvard for the ACM Doctoral Dissertation Award. She is a Siebel Scholar, an invited participant at the Rising Stars in EECS Workshop, and a recipient of the IBM Ph.D. Fellowship.