An NSF Expedition Project
REAL-TIME INTELLIGENT SECURE EXPLAINABLE SYSTEMS
In the RISELab, we develop technologies that enable applications to make low-latency decisions on live data with strong security.
RISE postdoc Hao Zhang wins Jay Lepreau Best Paper Award at OSDI ’21 for: “Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning” — September 17, 2021
Congrats Hao! RISELab postdoc Hao Zhang, working with Prof. Ion Stoica, has won the Jay Lepreau Best...
The inside story of how UC Berkeley became the incubator for red-hot enterprise startups Databricks, SiFive, and Anyscale — September 14, 2021
Business Insider reports on how RISELab’s unique research model has sparked a generation of successful...
Dissertation Talk by Devin Petersohn: Dataframe Systems: Theory, Architecture, and Implementation; 3 PM, Monday, August 9
Title: Dataframe Systems: Theory, Architecture, and Implementation Speaker: Devin Petersohn Advisor: Anthony...
Dissertation Talk: Compartmentalizing Consensus (Michael Whittaker); Thursday, July 29, 2021 12:30 PDT
Title: Compartmentalizing Consensus Speaker: Michael Whittaker Advisor: Joe Hellerstein Date: Thursday, July 2...
Next Friday (June 11) at 11 AM, we’ll have a security seminar talk with Ethan Cecchetti, who will be pre...
Berkeley’s computer science division has an ongoing tradition of 5-year collaborative research labs. In the fall of 2016 we closed out the most recent of the series: the AMPLab. We think it was a pretty big deal, and many agreed.
One great thing about Berkeley is the endless supply of energy and ideas that flows through the place — always bringing changes, building on what came before. In that spirit, we’re fired up to announce the Berkeley RISELab, where we will focus intensely for five years on systems that provide Real-time Intelligence with Secure Explainable decisions.
RISELab represents the next chapter in the ongoing story of data-intensive systems at Berkeley: a proactive step to move beyond Big Data analytics into a more immersive world. The RISE agenda begins by recognizing that there are big changes afoot:
- Sensors are everywhere. We carry them in our pockets, we embed them in our homes, we pass them on the street. Our world will be quantified, in fine detail, in real time.
- AI is for real. Big data and cheap compute finally made some of the big ideas of AI a practical reality. There’s a ton more to be done, but learning and prediction are now practical tools in the computing toolbox.
- The world is programmable. Our vehicles, houses, workplaces and medical devices are increasingly networked and programmable. The effects of computation are extending to include our homes, cities, airspace, and bloodstreams.
In short, the loop between data generation, computation, and actuation is closing. And this is no longer a niche scenario: it’s going to be a standard mode of technology going forward.
Our mission in the RISELab is to develop technologies that enable applications to interact intelligently and securely with their environment in real time.
As in previous labs, we’re all in — working on everything from basic research to software development, all in the Berkeley tradition of open publication and open source software. We’ll use this space to lay out our ideas and progress as we go.
Commitment to Diversity
RISELab is guided by Berkeley’s Principles of Community and is committed to providing a safe and caring research environment for every member of our community. We believe that a diverse student body, faculty, and staff are essential to the open exchange of ideas that RISELab was founded on.
In addition to the NSF Expedition award, we’re extremely fortunate at Berkeley to be supported by — and working with — some of the world’s biggest and most innovative companies. The RISELab’s founding sponsors are quite the crew: Amazon Web Services, Ant Group, Capital One, Ericsson, Facebook, Google, Intel, Microsoft Research, Scotiabank, Splunk, and VMware. Thanks to all.
Tune is a powerful library for distributed hyperparameter tuning developed in the RISELab. Built on top of Ray, Tune allows users to easily leverage hyperparameter optimization algorithms including ASHA and Population-Based Training at scale. Tune integrates with the Ray autoscaler to seamlessly launch fault-tolerant distributed hyperparameter tuning jobs on Kubernetes, AWS or GCP. Tune supports any machine learning framework, including PyTorch, TensorFlow, XGBoost, LightGBM, and Keras.