Artificial intelligence (AI) makes decisions for us every day, such as choosing which ads we see on the Internet. In high-stakes situations, such as navigating unmanned aerial vehicles for military purposes, these decisions may not be fully trusted, and humans are often brought into the loop to make the final decisions.
With a three-year, $453,000 grant from the Office of Naval Research, Chien-Ju Ho, assistant professor of computer science & engineering in the McKelvey School of Engineering at Washington University in St. Louis, and his co-investigator, Yang Liu, assistant professor of computer science & engineering at the University of California, Santa Cruz, will study AI-augmented human decision-making, with a focus on how human decision biases affect the policies that are ultimately adopted. They will also examine whether the biases of an opponent decision maker can be detected or exploited, and how to design decision-making policies that are robust to an opponent's attempts at exploitation.
The research applies to a range of scenarios, including those in which naval commanders face uncertain and adversarial environments and must adapt by making a sequence of decisions. Ho said the outcomes of the research would allow them to better explain decisions observed in the field and open the possibility of providing interventions: to naval commanders, to smooth out the potential negative effects of human behavioral biases, and toward opponent commanders, to induce them to deviate from their better actions.