
CSE Colloquia Series: Steven Wu

Mar 3
11 a.m.
Lopata Hall, Room 101

Protecting People from Algorithms (and Vice Versa)

Steven Wu

Ph.D. Candidate

Department of Computer Science

University of Pennsylvania

Abstract

Computing technologies today have made it much easier to gather personal data. Algorithms constantly analyze this personal information and make consequential decisions about people. The extensive use of algorithms exposes people to the risk of being mistreated by algorithms, for example through privacy violations or unfair discrimination. There is also a risk of people mistreating algorithms: in a strategic environment, people may have incentives to misreport their data and game the algorithms for their own benefit.

In this talk, I will first present an overarching theme in my research: protecting people and algorithms from each other. In particular, my work (1) protects people from algorithms by developing algorithms with privacy and fairness guarantees, and (2) protects algorithms from people by designing algorithms that incentivize truthful behavior.

I will then present two technical results from my work on differential privacy, a rigorous algorithmic notion of data privacy. The first result addresses a fundamental problem in differential privacy: private query release. I will present a scalable algorithm that accurately and privately answers a large collection of counting queries over high-dimensional data. In the second result, I will focus on a general framework for solving a family of economic optimization problems under a strong relaxation of differential privacy. I will also demonstrate how differential privacy can serve as a novel tool to incentivize truth-telling when algorithms must elicit input data from self-interested participants.
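As background, differential privacy requires that an algorithm's output distribution change very little when any single person's record is added or removed from the input. The sketch below is a minimal illustration of the classic Laplace mechanism for a single counting query, which is standard textbook material and not the scalable multi-query algorithm presented in the talk; the function and data names are assumptions chosen for the example.

    # Illustrative sketch only: Laplace mechanism for one counting query.
    import numpy as np

    def private_count(data, predicate, epsilon):
        """Answer a counting query with epsilon-differential privacy.

        A counting query asks how many records satisfy `predicate`.
        Adding or removing one record changes the true count by at most 1
        (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
        """
        true_count = sum(1 for record in data if predicate(record))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: privately count records with age >= 30 at epsilon = 0.5.
    records = [{"age": 25}, {"age": 34}, {"age": 41}, {"age": 29}]
    print(private_count(records, lambda r: r["age"] >= 30, epsilon=0.5))

The scalability challenge the talk addresses arises because answering many such queries this way accumulates noise; the presented algorithm handles large collections of counting queries jointly rather than one at a time.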

Biography

Steven is currently a Ph.D. candidate in computer science at the University of Pennsylvania, where he is co-advised by Michael Kearns and Aaron Roth. His primary research interests are in developing theory and algorithms for privacy-preserving data analysis. He has also been studying machine learning in economic environments, designing algorithms that learn from observed economic behavior and create desirable incentives for participants. His more recent research focuses on fairness in machine learning, where the goal is to provide fair decision-making algorithms that learn from personal data.

During the summer of 2015 and the spring of 2016, he was a research intern at Microsoft Research New York City (MSR-NYC), and during the summer of 2016, he was a research intern at Microsoft Research New England (MSR-NE).