On May 2, RISELab and the Berkeley DeepDrive (BDD) lab held a joint, largely student-driven mini-retreat aimed at exploring research opportunities at the intersection of the two labs. The topical focus was emerging AI applications, such as Reinforcement Learning (RL), and the computer systems needed to support them. Trevor Darrell kicked off the event with an introduction to the Berkeley DeepDrive lab, followed by Ion Stoica’s overview of RISE. The event offered a great opportunity for researchers from both labs to exchange ideas about their ongoing research activities and discover points of collaboration.
Philipp Moritz started the first student talk session with an update on Ray — a distributed execution framework for emerging Artificial Intelligence applications. Ray, an open source project developed in RISELab, targets distributed Reinforcement Learning applications [1] by providing an execution framework that supports millisecond-level tasks at a throughput of millions of tasks per second — performance targets that RL applications require, since they combine soft real-time requirements with large numbers of dynamically constructed, sample-dependent micro-tasks.
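The workload Ray targets can be pictured as rounds of many small tasks whose inputs depend on earlier results. Here is a rough, stdlib-only sketch of that dynamic micro-task pattern (the function names are illustrative placeholders; Ray itself expresses this with `@ray.remote` tasks and futures rather than `concurrent.futures`):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for an RL workload: each "rollout" is a
# millisecond-scale micro-task, and later tasks depend on its result.
def rollout(policy_version, seed):
    # Placeholder simulation step; a real rollout would run an environment.
    return (policy_version + seed) % 7

def update_policy(policy_version, rewards):
    # Placeholder policy update driven by the sampled rewards.
    return policy_version + sum(rewards)

with ThreadPoolExecutor(max_workers=8) as pool:
    policy = 0
    for _ in range(3):  # tasks are constructed dynamically, round by round
        futures = [pool.submit(rollout, policy, seed) for seed in range(4)]
        rewards = [f.result() for f in futures]
        policy = update_policy(policy, rewards)
```

The key property is that each round's tasks are created only after the previous round's results arrive — exactly the dynamic, sample-dependent task graph that makes high task throughput essential.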
Alexey Tumanov continued the discussion by introducing Clipper, a model-serving system that decouples machine learning applications from the models they consume, allowing each layer to evolve independently. This is particularly important as the number and variety of deployed models continue to grow. Clipper also heralds a new trend in which more computational resources are spent serving models than training them — a recent development that calls for appropriate systems support. Following talks by Wenting Zhang and Joe Near on privacy considerations in data analytics and machine learning, Francois Belletti concluded the session with a talk on deep continuous-time learning.
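The decoupling idea can be illustrated with a toy in-process registry that hides model implementations behind a stable prediction interface. All names below are hypothetical for illustration only — Clipper's real interface is a served system with features such as adaptive batching, not this API:

```python
# Toy sketch of decoupled model serving: applications call a stable
# predict() interface, while models register and evolve independently.
_models = {}

def register_model(name, version, fn):
    # A newer version replaces the older one without touching callers.
    _models[name] = (version, fn)

def predict(name, x):
    version, fn = _models[name]
    return {"model": name, "version": version, "output": fn(x)}

# Two interchangeable "models" behind the same interface.
register_model("sentiment", 1, lambda x: 1 if "good" in x else 0)
register_model("sentiment", 2, lambda x: 1 if ("good" in x or "great" in x) else 0)

result = predict("sentiment", "a great movie")
```

Because applications only see `predict()`, the model behind a name can be retrained, upgraded, or swapped out entirely while the application code stays unchanged.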
After a lively poster session with many animated discussions between the members of the two labs, Dawn Song discussed security considerations in machine learning, and Ken Goldberg expanded on robotic applications for real-time systems.
Fisher Yu kicked off the second student session with an overview of BDD’s experience with large-scale driving dataset collection and efficient networks. Michael Laskey discussed scalable off-policy imitation learning via noise injection. Cyprien Noel shared new trends in hardware accelerators and proposed a simple programming model to utilize these processors efficiently. The topic of security in the context of machine learning was picked up again by Chang Liu, who explored the synergy between deep learning and security. Roy Fox wrapped up the second student session with a talk on learning multi-level hierarchical control policies.
The mini-retreat sparked ongoing discussions between the BDD and RISE labs, which have already begun to bear fruit. One follow-up meeting centered on systems support for the self-driving car project, specifically real-time sensor data streaming, collection, and storage. An ongoing collaboration including Fisher Yu and Xin Wang studies the performance of models trained for self-driving cars, with the aim of dynamically controlling model batch size and model selection as a function of runtime conditions. The newest thread of collaborative research with BDD is attention-aware sensory data processing and model training. According to Vaswani et al. [2], “attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences.” More collaboration is expected, particularly in the context of ML model inference composition and the application of RL to self-driving cars [3,4].
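The attention mechanism of Vaswani et al. can be made concrete with the scaled dot-product form, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, shown here on toy dimensions in pure Python:

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, on nested lists."""
    d_k = len(K[0])
    # Scores: one row of compatibilities per query, against every key.
    scores = [[sum(q[i] * k[i] for i in range(d_k)) / math.sqrt(d_k)
               for k in K] for q in Q]
    # Row-wise softmax turns scores into attention weights.
    weights = []
    for row in scores:
        m = max(row)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Output: each query's attention-weighted average of the values.
    return [[sum(w[j] * V[j][i] for j in range(len(V)))
             for i in range(len(V[0]))] for w in weights]

# One query attending over two key/value pairs; the query is closer to
# the first key, so the output leans toward the first value.
out = scaled_dot_product_attention(
    Q=[[1.0, 0.0]], K=[[1.0, 0.0], [0.0, 1.0]], V=[[10.0], [20.0]])
```

Note how the weighting depends only on query–key similarity, not on position — which is exactly why attention can model dependencies regardless of their distance in the sequence.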
[1] Robert Nishihara, Philipp Moritz, Stephanie Wang, Alexey Tumanov, William Paul, Johann Schleier-Smith, Richard Liaw, Mehrdad Niknami, Michael I. Jordan, Ion Stoica. “Real-Time Machine Learning: The Missing Pieces”. In Proc. of ACM HotOS 2017, Whistler, BC, May 2017. [arXiv]
[2] Ashish Vaswani et al. “Attention Is All You Need”. arXiv preprint, v4, Jun 30, 2017.
[3] Darrell Etherington, “Tesla hires deep learning expert Andrej Karpathy to lead Autopilot vision”, TechCrunch, Jun 20, 2017.
[4] Mariusz Bojarski, Ben Firner, Beat Flepp, Larry Jackel, Urs Muller and Karol Zieba, “End-to-End Deep Learning for Self-Driving Cars”, NVIDIA, Aug 17, 2016.