Abstract
The majority of today’s ML models are approximate solutions to minimization problems. This works well when we’re designing functions to perform well-defined tasks in isolation with quantifiable performance metrics, but it turns out not to cover many behaviors and capabilities we associate with true intelligence. In fact, attempts to replace, scale up, or conceive of notional roles in human society in terms of quantifiable loss functions, hence well-specified ML models, have resulted in a range of real and perceived social problems, helping fuel a backlash against tech. In this talk we’ll explore the limits of optimization and chart some paths forward that might both allow ML to better integrate into human sociotechnical systems and offer productive routes toward general intelligence.
Bio
Blaise Agüera y Arcas is a VP and Fellow at Google Research, where he leads an organization working on both basic research and new products in AI. His focus is on augmentative, privacy-first, and collectively beneficial applications, including on-device ML for Android phones, wearables, and the Internet of Things. One of the team’s technical contributions is Federated Learning, an approach to training neural networks in a distributed setting that avoids sharing user data. Blaise also founded the Artists and Machine Intelligence program, and has been an active participant in cross-disciplinary dialogues about AI and ethics, fairness and bias, policy, and risk. Until 2014 he was a Distinguished Engineer at Microsoft. Outside the tech world, Blaise has worked on computational humanities projects, including the digital reconstruction of Sergei Prokudin-Gorskii’s color photography at the Library of Congress and the use of computer vision techniques to shed new light on Gutenberg’s printing technology. Blaise has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016), and gave a keynote at NeurIPS on social intelligence (2019). In 2008 he was awarded MIT’s TR35 prize. In 2018 and 2019 he taught the course “Intelligent Machinery, Identity, and Ethics” at the University of Washington, placing computing and AI in a broader historical and philosophical context.