All’s fair in artificial intelligence?
A team of faculty from computer science & engineering, social work and law will investigate fairness in decision-making by artificial intelligence
Artificial intelligence is often used to automate decision-making, speeding up the process and, ideally, removing human bias. The humans who write the algorithms want the outcomes of those decisions to be fair for everyone involved, but how does one teach an algorithm what is fair and what is not?
A team of computer scientists from the Department of Computer Science & Engineering in the McKelvey School of Engineering at Washington University in St. Louis is working with researchers from the Brown School and the School of Law to develop a framework for algorithms that can make decisions with fair outcomes. The game-theory-based framework, to be called FairGame, will include an auditor that can detect potential fairness violations.
"The ultimate goal is to make sure that whatever algorithms we design make decisions that are fair according to some specified criteria when these decisions are consequential for individuals," said Yevgeniy Vorobeychik, associate professor of computer science & engineering and principal investigator in the project, which is funded by a three-year, $444,145 grant from the National Science Foundation.
To do this, the researchers must first assess whether something is fair.
"We want to be as agnostic as possible to how policymakers would define fairness in the particular situation, and want to make sure that the algorithm is fair with respect to those criteria," Vorobeychik said. "Second, we want to use this generic approach for verifying or certifying fairness. If you design an algorithm, you have to account for the fact that someone is going to see if there are any issues with the way you're making decisions in terms of being unfair to people, and ensure that your decisions don't inadvertently discriminate against individuals or groups."
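FairGame's auditor is a research project, not public code, but the idea of checking decisions against a specified fairness criterion can be illustrated with a toy sketch. Here the criterion is demographic parity (similar rates of favorable decisions across groups), one of many notions an auditor could be instantiated with; the function name, data, and threshold are all hypothetical:

```python
# Toy sketch of one fairness criterion an auditor might check: demographic
# parity, i.e. the rate of favorable decisions should be similar across
# groups. Not the project's actual auditor; names and data are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in favorable-decision rates
    between any two groups.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups: list of group labels, one per decision
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, favorable = counts.get(g, (0, 0))
        counts[g] = (total + 1, favorable + d)
    rates = {g: fav / tot for g, (tot, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# A hypothetical audit: group A is approved at 0.75, group B at 0.25.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap = {demographic_parity_gap(decisions, groups):.2f}")
# prints: parity gap = 0.50
```

Being "agnostic" to the definition, in this spirit, would mean the auditor accepts whatever criterion function policymakers specify rather than hard-coding one.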
Sanmay Das, another investigator on the project, said the framework will help researchers analyze the properties of a decision problem, or develop good solutions to it, under different possible notions of fairness.
"Suppose you have a procedure for allocating resources for making decisions, and separately, some notion of equity or fairness that can be specified by society or by legal considerations," Das said. "We have to think about the interactions of the goals of the decision-maker and the constraints imposed by the equity considerations when we're thinking about developing a framework that allows us to end up with solutions that can satisfy both.
"My hope is that we can do this in general for decision procedures that will operate across different domains and may have to satisfy different fairness metrics," Das said.
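The interaction Das describes, between a decision-maker's objective and an externally specified equity constraint, can be sketched with a toy allocation problem. This is not the project's framework; the function, data, and the particular constraint (each group is guaranteed a minimum number of resource slots) are hypothetical choices for illustration:

```python
# Toy sketch: a decision-maker allocates a limited budget of resources to
# maximize total benefit, subject to an equity constraint specified from
# outside (each group gets at least `min_per_group` slots). Hypothetical
# example, not the project's actual framework.

def allocate(candidates, budget, min_per_group):
    """Allocate `budget` slots: first guarantee each group `min_per_group`
    slots (filled by its highest-benefit members), then spend the
    remainder greedily on the highest-benefit candidates left.

    candidates: list of (id, group, benefit) tuples
    """
    groups = sorted({g for _, g, _ in candidates})

    chosen = []
    # Equity floor: top candidates within each group get a slot first.
    for g in groups:
        in_group = sorted((c for c in candidates if c[1] == g),
                          key=lambda c: -c[2])
        chosen.extend(in_group[:min_per_group])

    # Decision-maker's objective: fill remaining slots by highest benefit.
    remaining = sorted((c for c in candidates if c not in chosen),
                       key=lambda c: -c[2])
    chosen.extend(remaining[:budget - len(chosen)])
    return chosen

candidates = [("a1", "A", 10), ("a2", "A", 9), ("a3", "A", 8),
              ("b1", "B", 3), ("b2", "B", 2)]
# Unconstrained greedy allocation would give all three slots to group A;
# the equity floor reserves one slot for group B's best candidate.
print([c[0] for c in allocate(candidates, budget=3, min_per_group=1)])
# prints: ['a1', 'b1', 'a2']
```

Even in this toy, the constrained solution sacrifices some total benefit to satisfy the equity condition, which is exactly the trade-off between the decision-maker's goals and society's or the law's fairness requirements that the framework is meant to reason about.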
Vorobeychik and Das, along with Roman Garnett, Chien-Ju Ho, and Brendan Juba, all assistant professors in the Department of Computer Science & Engineering, will work with Patrick Fowler, associate professor in the Brown School, and Pauline Kim, the Daniel Noyes Kirby Professor of Law in the School of Law, to ensure that the framework abides by moral, ethical and legal constraints.
In addition to developing the framework, the team plans to develop new courses and course modules; take a lead role in the Division of Computational & Data Sciences doctoral program; inform policymakers and regulators about computational approaches to ensuring fairness; and work to broaden participation in computing through partnerships and summer research opportunities for students.