RISE Seminar 10/19/18: Modeling (Human) Bias in Artificial Intelligence, a talk by Margaret Mitchell

October 19, 2018

Title: Modeling (Human) Bias in Artificial Intelligence

Speaker: Margaret Mitchell

Affiliation: Google

Date and location: Friday, October 19, 12:30 – 1:30 pm; Wozniak Lounge (430 Soda Hall)


Human data brings with it human biases. Algorithms trained on that data can perpetuate and amplify these biases, creating feedback loops that deepen social division. In this talk, I walk through how human bias is at play throughout the end-to-end machine learning cycle, and the effects this can have on society.


Margaret is a Senior Research Scientist and leads the Ethical AI team within Google Research. Her research is interdisciplinary, combining computer vision, natural language processing, statistical methods, deep learning, and cognitive science, and she applies her work in clinical and assistive domains. She has published over 40 papers, including at top-tier conferences in NLP, computer vision, cognitive science, and AI ethics. She is also the co-founder of the annual workshops Clinical Psychology and Computational Linguistics, Ethics in Natural Language Processing, and Women and Underrepresented Minorities in Natural Language Processing. Her TED talk on evolving artificial intelligence towards positive goals has over one million views, and Seeing AI, a system she co-developed based on her first-place image-captioning work, has won the Helen Keller Achievement Award and the Fast Company Innovation by Design award. In her spare time, she enjoys running, drawing pictures in Paint, and lounging around with her Great Dane, Wendell.