An NSF Expedition Project
REAL-TIME INTELLIGENT SECURE EXPLAINABLE
In the RISELab, we develop technologies that enable applications to make low-latency decisions on live data with strong security.
Prof. Ion Stoica and Xin Jin, a former postdoc in RISELab (now a faculty member at Johns Hopkins University), have...
We are proud to announce that Professor David Patterson, one of the RISELab founders, is a recipient of the 2017...
RISE Seminar 4/12/18: C. Mohan (IBM Fellow and Distinguished Visiting Prof (Tsinghua Univ)): Landscape of Practical Blockchain Systems and their Applications
Title: Landscape of Practical Blockchain Systems and their Applications Date: Thursday, April 12th, 12-1pm, Lo...
RISE Seminar 4/5/18: Matt Johnson, Roy Frostig, and Chris Leary (Google Brain): Compiling machine learning programs via high-level tracing
Title: Compiling machine learning programs via high-level tracing Date: Thursday, April 5th, 12-1pm, Wozniak L...
Title: Probabilistic Programming and Models of Cognition Date: Thursday, March 22nd, 12-1pm, Wozniak Lounge (...
March 28, 2018
UC Berkeley’s pathbreaking entry-level course on the Foundations of Data Science (Data 8) is laun...
March 20, 2018
In this blog post we introduce Ray RLlib, an RL execution toolkit built on the Ray distributed ex...
March 12, 2018
This article cross-posted from the DataBeta blog. There’s fast and there’s fast. This post is ab...
Berkeley’s computer science division has an ongoing tradition of 5-year collaborative research labs. In the fall of 2016 we closed out the most recent of the series: the AMPLab. We think it was a pretty big deal, and many agreed.
One great thing about Berkeley is the endless supply of energy and ideas that flows through the place — always bringing changes, building on what came before. In that spirit, we’re fired up to announce the Berkeley RISELab, where we will focus intensely for five years on systems that provide Real-time Intelligence with Secure Explainable decisions.
RISELab represents the next chapter in the ongoing story of data-intensive systems at Berkeley: a proactive step to move beyond Big Data analytics into a more immersive world. The RISE agenda begins by recognizing that there are big changes afoot:
- Sensors are everywhere. We carry them in our pockets, we embed them in our homes, we pass them on the street. Our world will be quantified, in fine detail, in real time.
- AI is for real. Big data and cheap compute finally made some of the big ideas of AI a practical reality. There’s a ton more to be done, but learning and prediction are now practical tools in the computing toolbox.
- The world is programmable. Our vehicles, houses, workplaces and medical devices are increasingly networked and programmable. The effects of computation are extending to include our homes, cities, airspace, and bloodstreams.
In short, the loop between data generation, computation, and actuation is closing. And this is no longer a niche scenario: it’s going to be a standard mode of technology going forward.
Our mission in the RISELab is to develop technologies that enable applications to interact intelligently and securely with their environment in real time.
As in previous labs, we’re all in — working on everything from basic research to software development, all in the Berkeley tradition of open publication and open source software. We’ll use this space to lay out our ideas and progress as we go.
In addition to the NSF Expedition award, we’re extremely fortunate at Berkeley to be supported by — and working with — some of the world’s biggest and most innovative companies. The RISELab’s 13 founding sponsors are quite the crew: Alibaba Group, Amazon Web Services, Ant Financial, Capital One, Ericsson, Facebook, Google, Huawei, Intel, Microsoft Research, Scotiabank, Splunk and VMware. Thanks to all.
Machine learning is being deployed in a growing number of applications that demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems address only model training, not deployment.
Clipper is a general-purpose low-latency prediction serving system. Interposed between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks.
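To make two of these serving techniques concrete, here is a minimal Python sketch of prediction caching and query batching. The class names (`PredictionCache`, `BatchedModel`) and the stand-in model are illustrative assumptions for this post, not Clipper's actual API; the point is only how a cache absorbs repeated queries and how batching amortizes per-call framework overhead.

```python
import hashlib
from collections import OrderedDict

class PredictionCache:
    """Small LRU cache keyed on a hash of the input (illustrative,
    not Clipper's implementation)."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._store = OrderedDict()

    def _key(self, x):
        return hashlib.sha1(repr(x).encode()).hexdigest()

    def get(self, x):
        k = self._key(x)
        if k in self._store:
            self._store.move_to_end(k)  # mark as recently used
            return self._store[k]
        return None

    def put(self, x, y):
        k = self._key(x)
        self._store[k] = y
        self._store.move_to_end(k)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

class BatchedModel:
    """Queue individual queries, then evaluate the model on whole
    batches, amortizing per-call overhead across many queries."""
    def __init__(self, model_fn, max_batch=32):
        self.model_fn = model_fn  # takes a list of inputs, returns a list of outputs
        self.max_batch = max_batch
        self.queue = []

    def submit(self, x):
        self.queue.append(x)

    def flush(self):
        results = []
        while self.queue:
            batch = self.queue[:self.max_batch]
            self.queue = self.queue[self.max_batch:]
            results.extend(self.model_fn(batch))
        return results

# Usage with a stand-in "model" that doubles its inputs.
cache = PredictionCache()
model = BatchedModel(lambda xs: [2 * x for x in xs])
for x in [1, 2, 3]:
    model.submit(x)
preds = model.flush()
for x, y in zip([1, 2, 3], preds):
    cache.put(x, y)
print(cache.get(2))  # cached prediction: 4
```

In a real serving system these pieces sit behind an RPC frontend and the batch size is tuned against a latency deadline; the sketch above only shows the mechanics.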