Written by Stephen Ornes. Illustrations by Steve Edwards.
Last fall, pet lovers rejoiced when Google unveiled a new feature for its photo storage application. Thanks to cutting-edge technology, users who want to find images of their cats or dogs, or pictures of themselves with their pets, no longer have to scroll through thousands of photos. Instead, they can search the images using only the pet's name. The software does the rest.
The feature runs an algorithm, a mathematical recipe, that teaches the software how to recognize pets' faces. Facebook, Apple and Google incorporated human facial-recognition software into their photo services years ago, so a pet-recognition feature may have been inevitable.
Facial-recognition software represents yet another sign that artificial intelligence is steadily infiltrating daily life.
The term “artificial intelligence” broadly refers to ongoing efforts by computer scientists and others to develop machines that can not only perform tasks that seem to require intelligence but also learn to improve their performance: AI algorithms get smarter the more they are used. And recent developments have been inspired by the structures of the human brain.
Facial recognition software imitates the brain's hardwired ability to identify patterns; the more photos a program studies, the more accurate it gets. And while the goal of many AI projects is to get machines to do things that humans ordinarily do, some say it may be most useful when it automatically does the things that humans don't want to do.
Around the turn of the 21st century, machine learning began to take off. A subfield of computer science, machine learning uses large datasets to train a computer to make predictions and decisions independently. It is the success of machine learning that has made AI so ubiquitous that most people interact with AI on a daily basis. Voice-recognition programs built into smartphones and computers, such as Siri and Alexa, use AI to determine how to respond to spoken requests. Netflix uses algorithms to recommend movies. Online stores such as Amazon use AI to set prices and suggest items to shoppers.
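The core idea described above, that a program makes predictions from labeled examples and improves as the examples accumulate, can be sketched in a few lines. The sketch below is a toy nearest-neighbor "recommender"; the movie data and features are invented for illustration and are not from any real system.

```python
# Toy machine-learning sketch: predict a label for a new item from
# labeled examples using 1-nearest-neighbor. All data is invented.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    nearest = min(training_data, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Each example: (features, label). Features here are made-up scores
# for (explosions, romance). More examples tend to mean better predictions.
movies = [
    ((0.9, 0.1), "action"),
    ((0.8, 0.3), "action"),
    ((0.1, 0.9), "romance"),
    ((0.2, 0.8), "romance"),
]

print(predict(movies, (0.85, 0.2)))  # -> action
```

Real recommendation and voice-recognition systems are vastly more sophisticated, but they share this shape: a model trained on past data maps a new input to a prediction.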
The technology isn't perfect. Self-driving cars aren't ready for all public roadways, as the self-driving Uber that struck and killed a pedestrian in Arizona in March made clear. And Amazon shoppers have been baffled by its purchase suggestions at one time or another. Those flaws show there is still much work to be done to make AI useful.
At the same time, the goal of AI isn't a system that's perfect, but one that can learn from its mistakes, get better and heal its virtual wounds. At the School of Engineering & Applied Science at Washington University in St. Louis, researchers are leveraging the power of AI to address the challenges of making this technology useful in the real world. They're collaborating with colleagues on a range of interdisciplinary projects, from finding the best way to help the homeless to programming drones that can monitor farmers' fields or locate survivors in disaster-stricken areas.
"AI really captures the imagination regardless of your familiarity with computing," says Aaron Bobick, dean of the School of Engineering & Applied Science and the James M. McKelvey Professor. "It challenges the notion that intelligence is predominantly a human characteristic, which makes it both startling and sometimes alarming." At the same time, for the technology to mean something, "it has to be useful in the human world."
The eyes of a computer
As a graduate student at MIT, Bobick grew interested in computer vision and focused his research career on how to teach a computer to perceive human behavior and act on those observations.
As research into AI and robotics grew, Bobick later founded Georgia Tech's School of Interactive Computing, a department that focuses on understanding how computing can change the way people interact with information and the world. Fundamental to those questions is how robots engage with people in the human world.
Bobick says AI can help robots and people interact in a meaningful way.
"Right now, most robots exist in worlds that are designed for robots," he says.
At car manufacturing plants, for example, robots do the heavy lifting, including assembling the car's body. But those robots are kept behind fences and interact with human workers only when they're deactivated.
"What we really want to be able to do is to have humans and robots integrated much better," Bobick says. "To do that, robots have to be able to anticipate what people are going to do."
When Bobick arrived at WashU in 2015, he found a research environment primed to push the usefulness of AI forward.
“We want to leverage the growth in AI with the areas at WashU in which we have deep strengths,” he says.
From helping the homeless to saving disaster survivors
The work of Sanmay Das, associate professor of computer science & engineering, shows that kind of collaboration. Das trained as a computer scientist, but most of his projects find him exploring the intersection of AI and social science. With researchers in the finance industry, he's developed machine-learning tools designed to help credit card companies and regulatory agencies identify individuals at risk of making late payments or defaulting on debt. He's looking at ways to scale those tools up so they might be used to analyze the economic performance and risk of banks. Many of his projects involve using AI approaches to help systems distribute scarce resources.
On another project, he's been analyzing homelessness as a data science problem. More than half a million people may live on the streets in the United States. City governments offer programs, including shelters and clinics, to help this population. But Das says there is an opportunity to do more: vast amounts of data exist on people, their interactions with the system and their outcomes. That data could be used to understand the effects of different kinds of interventions on different people, and that understanding could improve how resources are allocated.
Working with psychologist Patrick Fowler, associate professor in the Brown School, Das developed algorithms to predict whether homeless people were likely to return to the homelessness system after receiving help. The algorithm aims to find optimal ways for a city to deliver aid.
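One way to picture this kind of prediction is as a risk score built from a household's history. The sketch below is purely hypothetical, not the researchers' actual model: the feature names, weights and threshold are all invented to illustrate the shape of such an algorithm.

```python
# Hypothetical sketch of a return-to-homelessness risk score.
# Features, weights and data are invented for illustration only.
import math

def return_risk(features, weights, bias=0.0):
    """Logistic score in [0, 1]; higher means higher predicted risk."""
    z = bias + sum(w * features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights: prior shelter stays raise risk, steady
# employment lowers it.
weights = {"prior_stays": 0.8, "months_employed": -0.3, "has_dependents": 0.2}
household = {"prior_stays": 2, "months_employed": 6, "has_dependents": 1}

risk = return_risk(household, weights)
print(round(risk, 2))  # -> 0.5
```

A real system would learn the weights from historical outcome data rather than setting them by hand, and would then be evaluated for fairness across the different groups it scores.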
"Who are the kinds of people who would benefit from one type of homeless service or another?" Das asks.
Such research will be a part of the university’s new Division of Computational and Data Sciences aimed at training students interested in applying data and computing to some of today’s important societal problems. The new division, which will offer a doctoral degree, will be led by faculty from the Brown School, the School of Engineering & Applied Science, and Arts & Sciences.
Das sees thematic parallels between that type of research and precision medicine, which uses a patient's genomic data to identify the optimal treatment for a disease. He has been studying kidney transplant systems with colleagues at Barnes-Jewish Hospital. When a person's kidneys fail, the person needs a replacement. In roughly one-third of cases, a kidney is donated by a compatible living donor. But sometimes the donor and the recipient are incompatible.
In those cases, they might enter a kidney exchange where the donor gives a kidney to another patient in need, and that patient's donor gives a kidney to the original patient. This idea has given rise to chains of donors, Das says, but he suspects those exchanges could be more efficient. He is building AI algorithms to optimize those cycles of donations and produce more matches, with the goal of getting kidneys to the people who need them.
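The simplest version of such an exchange is a two-way swap, which can be found by checking pairs against a compatibility table. The sketch below is a toy version of this matching problem, with invented names and compatibility data; real exchange systems optimize over much longer chains, typically with integer programming.

```python
# Toy kidney-exchange sketch: find two-way swaps where each patient's
# donor is compatible with the other patient. Data is invented.

def find_two_way_swaps(pairs, compatible):
    """pairs: {patient: donor}. compatible: {donor: [patients]}.
    Return pairs of patients whose donors can swap."""
    swaps = []
    patients = list(pairs)
    for i, p1 in enumerate(patients):
        for p2 in patients[i + 1:]:
            if p2 in compatible[pairs[p1]] and p1 in compatible[pairs[p2]]:
                swaps.append((p1, p2))
    return swaps

pairs = {"Alice": "donor_a", "Bob": "donor_b", "Cara": "donor_c"}
compatible = {
    "donor_a": ["Bob"],            # Alice's donor matches Bob...
    "donor_b": ["Alice", "Cara"],  # ...Bob's donor matches Alice: a swap.
    "donor_c": [],
}

print(find_two_way_swaps(pairs, compatible))  # -> [('Alice', 'Bob')]
```

Scaling this up, to longer cycles and chains over thousands of patient-donor pairs, is exactly where optimization algorithms of the kind Das is building come in.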
William Yeoh, assistant professor of computer science & engineering, also says he's motivated by the intention of using AI where it is most beneficial. Recently, he has built AI systems to enable communication among groups of connected devices. Imagine a cluster of small flying drones that use cameras to monitor a farmer's fields for signs of drought or scan a disaster-ridden city for survivors in need of help. Those devices need to talk with each other while flying to avoid collisions and to cover the most area in the least amount of flying time.
Yeoh points to another situation that may seem unrelated, but actually poses a very similar challenge: the "smart home." People are increasingly buying household appliances and other devices connected through a wireless router. This is the Internet of Things, a vision of a world where anything can be programmed to respond to the world around it. Smart refrigerators can help one achieve a healthier diet; smart garage doors can open with a swipe on a smartphone. Yet for all of their smarts, these gadgets don't know about each other.
"They don't really interact," Yeoh says, but they could. AI could be used in algorithms that unify all these devices and teach them how to respond to the homeowner.
In Yeoh's vision of a smart home, a short statement such as "I want to watch something" triggers a chain of events: The lights dim, the temperature lowers and a show begins. And this connectivity arises organically: The system becomes more intelligent because the devices talk to and learn from one another.
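The scenario above can be pictured as a small rule engine in which one spoken intent fans out to actions on several devices. The device names and actions below are invented for illustration; a learned system would acquire such rules from the devices' interactions rather than having them written by hand.

```python
# Sketch of the smart-home scenario: one intent triggers a chain of
# device actions. Rules, devices and actions are invented.

rules = {
    "watch something": [
        ("lights", "dim"),
        ("thermostat", "lower"),
        ("tv", "play"),
    ],
}

def handle(intent, log):
    """Dispatch every action registered for `intent`, recording each one."""
    for device, action in rules.get(intent, []):
        log.append(f"{device}: {action}")
    return log

print(handle("watch something", []))
# -> ['lights: dim', 'thermostat: lower', 'tv: play']
```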
Bugs, subs and vital signs
When Xuan "Silvia" Zhang, assistant professor of electrical & systems engineering, thinks about how to make AI useful in the real world, she looks to nature. Insects provide a ruthlessly efficient model: What they lack in cognition they make up for in unmatched agility. Zhang and her students have been building insect-inspired, autonomous flying robots that use AI to zip through the air like bugs. Small devices require less power and less money to build, offering a cheap, efficient way to extend the reach of AI. But they pose challenges, such as fitting AI algorithms into small spaces and designing miniature batteries and sensors. In her lab, Zhang develops sensor and battery systems that can store and distribute power in devices, including AI-powered robots.
Yixin Chen, professor of computer science & engineering, is looking for ways to extend AI's reach into the hospital setting. Last fall, Chen launched a project with anesthesiologist Michael Avidan, MBBCh, at the School of Medicine. In a pilot study, they are using AI to monitor the vital signs of patients undergoing surgery in all 48 operating rooms at Barnes-Jewish Hospital. The data, displayed on screens in a nearby room, is monitored by a clinician. Green lights indicate normal vital signs, yellow suggests trouble and red lights flag an urgent problem.
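The traffic-light display described above amounts to mapping each vital sign into one of three bands. The sketch below shows that logic for a single sign, heart rate; the thresholds are invented for illustration and are not the study's actual clinical criteria.

```python
# Sketch of a traffic-light monitor for one vital sign.
# Thresholds are invented, not actual clinical criteria.

def heart_rate_status(bpm):
    """Classify a heart rate as green (normal), yellow or red."""
    if 60 <= bpm <= 100:
        return "green"    # normal vital sign
    if 50 <= bpm < 60 or 100 < bpm <= 120:
        return "yellow"   # suggests trouble
    return "red"          # flags an urgent problem

print(heart_rate_status(75))   # -> green
print(heart_rate_status(110))  # -> yellow
print(heart_rate_status(150))  # -> red
```

The AI in the real project lies beyond fixed thresholds: learning from many patients' data which patterns across multiple vital signs actually predict trouble.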
Identifying real-world applications for AI isn't the only challenge to making the technology useful. There also are theoretical hurdles, like understanding how an algorithm works in a particular situation, and design challenges, such as developing new materials that incorporate AI. WashU researchers are working on these fronts as well.
Vijay Ramani, the Roma B. & Raymond H. Wittcoff Distinguished University Professor of Environment and Energy, works at the intersection of electrochemical engineering and materials science. He and his collaborators have designed batteries and fuel cell systems for autonomous submarines and other submersibles for military and research applications. And Brendan Juba, an assistant professor who teaches AI classes in computer science & engineering, works on the theoretical development of algorithms that not only learn, but can reason using common sense.
When Bobick came to WashU, one of his goals was to foster projects where humans and computers interact in a way that builds on the university's research strengths. AI, he says, can be used to make effective systems that benefit humankind. Its ultimate goal isn't to take over the world, but to help people make good choices.
"You end up with decisions that get made better because they're based on models, supported by a lot of data," he says. "We want computers to be partners with people."
As part of that initiative, in 2019, WashU will join such prestigious universities as Stanford, Princeton, Boston University, Carnegie Mellon, Simon Fraser and the University of California, Berkeley, in hosting AI4All, a summer education program for high school students from underrepresented groups that emphasizes using AI for social good.
The late scientist Stephen Hawking once said that AI would be either the best or the worst thing ever to happen to humanity, but it was too early to tell which.
AI now touches nearly every aspect of modern life, drawing data from our bodies through wearable trackers, from our energy and road usage, and from our social networking behavior. All of this has led some, including Tesla CEO Elon Musk, to call for AI to be regulated.
"I think whenever something is — whenever there's something that affects the public good then there does need to be some form of public oversight," Musk said on CBS News on April 11. "I do think there should be some regulations on AI. I think there should be regulations on social media to the degree that it negatively affects the public good."
Das said that with the availability of this data comes the opportunity to learn more about ourselves as well as to make better decisions.
"I expect, in the future, to see a society in which many more decisions, both at the individual and societal level, are taken by combinations of humans and intelligent agents, from decision-making about medical interventions, to traffic routing and eventually driving, to the allocation of societal resources," Das said.
“The development and validation of AI algorithms that can participate in such decision-making in an economically efficient and equitable manner, respecting human judgments of morality and ethics, will be a major area of research and development.”
While there are benefits to AI, Yeoh says there are risks, as with any technology, such as self-driving vehicle accidents and Facebook's sharing of data on 50 million of its users with Cambridge Analytica, the British political consulting firm that used data mining to target Facebook users with political posts.
"AI researchers are very cognizant of these risks and are actively looking into incorporating ethics in AI education and training, such as at conferences dedicated to ethics in AI," Yeoh said. "We also will need to do a better job at reaching out to the general public to inform and educate them on the strengths and limitations of AI techniques. That way, the general public will better understand how their data might be used and take steps toward ensuring that they are only rightfully used; they will also have reasonable expectations of AI technologies instead of hype-driven inflated ones."