Image created using Midjourney, a generative AI program that creates images based on text prompts. Text prompt: Colorful illustration of a student at a computer using Midjourney to create a beautiful image. All illustrations in this story were a partnership between human and artificial intelligence.

The Artificial Intelligence Boom

The rapid rollout of new forms of artificial intelligence has led to widespread adoption, but also a lot of questions: Is this a benefit or a cause for concern?

Beth Miller  • 2023 Fall issue

A new world in artificial intelligence (AI) opened in the past year when OpenAI released ChatGPT, a large language model that generates humanlike text in response to prompts, drawing on patterns learned from vast amounts of internet text. Since then, similar programs and new technologies have continued to be released, even as some developers call for government regulation and other prominent voices have called for a six-month moratorium to slow things down, leaving many uncertain whether the advances are beneficial or a cause for concern.

We’re told that artificial intelligence will make our lives easier. Siri, the virtual assistant Apple acquired in 2010, is used daily by hundreds of millions of Apple users to get directions, check the weather and perform dozens of other tasks. ChatGPT, which is based on machine learning, is said to turn out eloquent prose, and generative art programs can create stunning, realistic images in seconds. But we’re also told that about 80% of the U.S. workforce could have at least 10% of their work tasks affected in some way by the introduction of large language models, according to a March 2023 study by OpenAI, OpenResearch and the University of Pennsylvania.

‘AI will not replace you, but the person using AI will.’

“What happens in the age of AI is going to supercharge humans and automate processes even more than before,” said Gaurav Garg, who earned bachelor’s and master’s degrees from the engineering school in 1988 and 1990, respectively, and is founding partner of Wing Venture Capital, a member of the McKelvey Engineering National Council and of the university’s Board of Trustees. “The world is going to shrink even further, and the ability to work remotely with AI will remove cultural, language and accent barriers, so we will be in a truly global labor marketplace.”

And while we start to grasp one new development, things change almost daily.

“The AI I know today might be different from the AI I know tomorrow. How do we cope with that?” asked Ning Zhang, assistant professor of computer science & engineering. “I don’t think any of us has the answer right now.”

Artificial intelligence researchers in the McKelvey School of Engineering are watching this dizzying array of developments with interest as they both study these new technologies and incorporate them into their own research. Although McKelvey Engineering faculty study many areas and aspects of artificial intelligence, this story will focus on only a few of those areas, as well as on how the school is integrating AI into the curriculum.

What happens in the age of AI is going to supercharge humans and automate processes even more than before.

- GAURAV GARG

AI and medicine

McKelvey Engineering faculty have had longstanding collaborations with faculty at the School of Medicine to improve diagnostics and, ultimately, patient care. Integrating artificial intelligence into medicine has the potential to hasten precision medicine as well as introduce time-saving methods that will allow health care providers to spend more time with patients.

Yixin Chen, professor of computer science & engineering, has been involved in a variety of projects in this field, including helping to develop an Anesthesia Control Tower. Like an air traffic control tower, the Anesthesia Control Tower uses machine learning to detect unforeseen events in anesthetized patients’ health, such as rapidly dropping blood pressure or an allergic reaction, before they become emergencies. Chen is now leading a team of computer science faculty and students to develop and improve machine learning models for a Telemedicine Control Tower that can reveal patterns in patient data.

“These patterns can help clinicians identify patients who might benefit from extra attention, facilitating earlier diagnosis of medical problems and earlier initiation of treatments,” Chen said. “These patterns might also help clinicians deliver targeted, more personalized care by allowing them to identify which medications or other interventions are most likely to benefit a particular patient.”
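To make the idea concrete, here is a minimal, hypothetical sketch of the kind of early-warning logic such a control tower might build on. It is not the team’s actual model; it simply flags a sharp drop in a stream of blood-pressure readings relative to a rolling baseline, with the window size and drop threshold chosen arbitrarily for illustration.

from collections import deque

def make_bp_monitor(window=30, drop_fraction=0.2):
    """Return a callable that ingests blood-pressure readings (mmHg)
    and returns True when the newest reading falls more than
    `drop_fraction` below the rolling-window average."""
    history = deque(maxlen=window)

    def check(reading):
        alert = False
        if len(history) == window:
            baseline = sum(history) / window
            alert = reading < baseline * (1.0 - drop_fraction)
        history.append(reading)
        return alert

    return check

monitor = make_bp_monitor()
readings = [92, 90, 91, 89, 90] * 6 + [70]  # stable vitals, then a sharp drop
for r in readings:
    if monitor(r):
        print(f"ALERT: reading of {r} mmHg is well below the recent baseline")

A real system would fuse many vital signs and use trained models rather than a fixed threshold; the point here is only the shape of the problem: streaming patient data in, early alerts out.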

Chenyang Lu, the Fullgraf Professor in the Department of Computer Science & Engineering, also has multiple collaborations with faculty from the School of Medicine that have produced models to predict physician burnout, surgical outcomes and outcomes of treatment for mental health disorders. Most recently, he and collaborators developed a deep-learning model called WearNet, trained on variables collected by the Fitbit activity tracker, including total daily steps, average heart rate and sedentary minutes. WearNet did a better job at detecting depression and anxiety than state-of-the-art machine learning models and produced individual-level predictions of mental health outcomes.

Data from wearables, which are used by about 25% of the U.S. population, could act as an automated screening tool for depression or anxiety in people who may be reluctant to visit a physician or a psychiatrist, Lu said.
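As a rough illustration of the inputs and outputs of such a screening task, the sketch below fits a simple logistic-regression baseline to synthetic wearable features. This is not WearNet, which is a deep-learning model; the feature names follow the article, but the data and the assumed risk relationship are fabricated purely for the demo.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic features per fictitious participant:
# [total daily steps, average heart rate (bpm), sedentary minutes]
X = np.column_stack([
    rng.normal(7000, 2500, n),   # steps
    rng.normal(72, 8, n),        # heart rate
    rng.normal(540, 120, n),     # sedentary minutes
])

# Fabricated labels: assume (for the demo only, not as a clinical
# finding) that fewer steps and more sedentary time raise risk.
risk = 0.0004 * (9000 - X[:, 0]) + 0.004 * (X[:, 2] - 480)
y = (risk + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
print("Predicted risk for a low-activity profile:",
      model.predict_proba([[3000, 78, 700]])[0, 1])

The appeal of such a screen, as Lu notes, is that the inputs are collected passively, so no clinic visit is required to raise an early flag.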

Lu said the way physicians and nurses work will change as AI assistants are incorporated into their workflows. 

“Our health care professionals need to use AI effectively as powerful assistants in their daily work,” he said. “As often stated, AI will not replace doctors, but doctors who use AI will replace those who do not. If we do it right, we will have a much more efficient, effective and affordable health care system through the integration of AI.” 

As often stated, AI will not replace doctors, but doctors who use AI will replace those who do not. If we do it right, we will have a much more efficient, effective and affordable health care system through the integration of AI.

– CHENYANG LU

Both Chen and Lu stress that there is a potential downside to AI integration.

“AI/machine learning learns from the past, including both the things we have historically done well and the things we have done less well,” Chen said. “We all know the American health care system has produced very different health outcomes for individuals from different racial, ethnic and other backgrounds, and we must be mindful about how AI/machine learning tools are designed and implemented to prevent these inequities from being perpetuated or even magnified.”

“AI may also generate misinformation and biased advice,” Lu said. “We need to provide safeguards against any potential harm while incorporating AI into medicine.” 

Cyberphysical systems and safety

The world isn’t replacing human intelligence with artificial intelligence but augmenting it through statistical methods that leverage the availability of large amounts of data, said Bruno Sinopoli, the Das Family Distinguished Professor and chair of the Preston M. Green Department of Electrical & Systems Engineering. 

“This is drastically changing science and engineering because this data was previously not available,” Sinopoli said. “With this data, we can now address questions that could not be answered by previous methods, which had limitations in modeling very complex engineered systems. At the same time, we have to face a number of challenges associated with the uncertainty introduced by data-driven models.”

Safety is of paramount concern in cyberphysical systems such as self-driving vehicles, drones, robotic surgical systems and smart grids. They must be able to withstand malicious activities by adversaries trying to create misinformation and chaos. For example, an autonomous car needs to be robust enough to withstand attacks that may cause a crash. Machine learning is often used to discern malicious behavior from normal behavior, said Yevgeniy Vorobeychik, associate professor of computer science & engineering.

While the AI developers are pushing the performance of AI to the next level, how do we make sure that lead is actually deployed in the real world and that we’re safe?

– NING ZHANG

Vorobeychik studies the integration of AI in autonomy and how to do it robustly with a complementary set of approaches, including verifying the properties of the system and making sure that it is safe and can withstand adversarial perturbations. He said it is hard to know when a system is secure.
 
“You don’t ever know until people have earnestly tried to attack, and it’s failed,” he said. “You can never say the system is secure in an absolute sense, but you can make sure it is secure with respect to a specific threat model of what can go wrong.”
 
Vorobeychik, Sinopoli and Zhang are all members of the Center for Trustworthy AI in Cyberphysical Systems at Washington University, under which faculty conduct a broad scope of research to study the safety and security of AI-driven cyberphysical systems.
 
“AI is a powerful tool that we need to master, and just like any other tool, there will be good and bad aspects,” Zhang said.
 
Zhang showed his students an example of the bad aspects: his computer classified a photo of a Golden Retriever as a bowl of guacamole after just four pixels in the photo were altered, simply by changing the proportions of the three color channels in those pixels.
 
“These are the pixels that an attacker would modify such that the image looks perfectly OK to a human but looks different to the machine,” Zhang said. “Now imagine if this were the adversarial example submitted to a missile locking system or a self-driving car.”
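The sketch below mimics the spirit of that demonstration with a toy stand-in classifier. The real demo involved a deep image model; everything here, including the classify stub, the image and the pixel coordinates, is invented for illustration. What it preserves is the core point: a handful of pixel edits a human would never notice can flip a model’s label.

import numpy as np

def classify(image):
    # Toy stand-in classifier, deliberately hypersensitive to one region.
    return "golden retriever" if image[5, 5].sum() < 2.0 else "guacamole"

rng = np.random.default_rng(1)
image = rng.random((32, 32, 3)) * 0.5  # stand-in photo; labeled "golden retriever"
assert classify(image) == "golden retriever"

adversarial = image.copy()
# Alter just four pixels by changing the proportions of their color channels.
for row, col in [(5, 5), (5, 6), (6, 5), (6, 6)]:
    adversarial[row, col] = [0.9, 0.9, 0.9]

print("original: ", classify(image))        # golden retriever
print("perturbed:", classify(adversarial))  # guacamole
print("pixels changed:", int((adversarial != image).any(axis=-1).sum()))

Real attacks search for such perturbations using the model’s gradients or black-box optimization; the defenses Zhang and Vorobeychik study aim to make that search fail.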
 
Zhang said he and Vorobeychik are working to improve the robustness of cyberphysical systems as well as their resiliency as machine learning is increasingly leveraged within them.
 
“One thing that motivates me to study this is that I see AI is going to make fundamental changes in all sectors,” Zhang said. “While the AI developers are pushing the performance of AI to the next level, how do we make sure that lead is actually deployed in the real world and that we’re safe?”

A sampling of faculty research in AI

Many faculty in the McKelvey School of Engineering focus their research on artificial intelligence and look at it from various angles. In addition, degree programs in which artificial intelligence and machine learning are the focus are already in place.

A new minor in biomedical data science, offered through the Department of Biomedical Engineering, had 28 students at the end of the Spring 2023 semester: one-third were majoring in biomedical engineering, and the rest had majors outside of biomedical engineering.

A. Andrew Clark

Andrew Clark is interested in introducing new game- and control-theoretic models that describe the impact of cyberattacks and other disturbances on physical infrastructures.

B. Katharine Flores

With funding from the National Science Foundation, Katharine Flores is incorporating artificial intelligence-based algorithms to identify which metal alloys are best for forming metallic glasses.

C. Nathan Jacobs

Nathan Jacobs’ research centers on developing learning-based algorithms and systems for extracting information from large-scale image collections. He has applied this expertise in many application domains, with a particular focus on geospatial and medical applications.

D. Yiannis Kantaros

Yiannis Kantaros' research focuses on developing scientific principles for improving the safety, robustness, efficiency and versatility of autonomous robot teams.

E. Jr-Shin Li

Jr-Shin Li’s lab combines artificial intelligence with systems theory to develop a more efficient way to detect and accurately identify an epileptic seizure in real-time.

F. Alvitta Ottley

Alvitta Ottley’s lab is developing tools to analyze and understand social data, including crime data, social media posts and home listings. Additionally, they use research methods from the social sciences to inform how to present the right data in the right way to the viewer.

G. Chenguang Wang

Chenguang Wang’s work focuses on fundamental natural language processing research including deep learning models, large language models, knowledge graphs, and language generation.

H. William Yeoh

William Yeoh and his collaborators recently received a $3 million grant from the National Science Foundation to advance AI in computational, environmental and social sciences.

AI and education

In addition to research, McKelvey Engineering faculty are adapting the curriculum so that students become AI-informed and AI-aware.

“We can’t be afraid to address these new technologies because the implications are coming regardless, so we’d better have as many tools as possible,” said Aaron Bobick, dean and the James M. McKelvey Professor, who is a pioneer in action recognition by computer vision. “We need an AI-informed citizenry. One key to a rational future is to have everyone understand as much as possible about these systems, and that’s why I’m willing to commit to all McKelvey Engineering students being AI aware.” 

To get there, McKelvey Engineering faculty are looking at how best to integrate AI into the curriculum. In fact, tackling the emerging challenges and evolving opportunities related to artificial intelligence and AI-assisted systems is an initiative in the new strategic plan being implemented in the school, one that aligns with the university’s goal of a “digital transformation.”

We can’t be afraid to address these new technologies because the implications are coming regardless, so we’d better have as many tools as possible.

– AARON BOBICK

“We are looking at how we contextualize artificial intelligence and machine learning as an experience for all,” said Jay Turner, vice dean for education and the James McKelvey Professor of Engineering Education. “When the curriculum is ideal, there are always two points: one is how we leverage technology to improve teaching and the student experience, and the other is teaching the skill sets that students need for a career in which this will be part of their daily life. We want to prepare them to have a mindset that, as these tools evolve and become more available, they will be opportunistic in how they use them.”

Academia also has a role to play in AI integration.

 “Academia has a unique responsibility to be transparent about its methods and data used to train algorithms,” said Alvitta Ottley, assistant professor of computer science & engineering. “More often than not, academia performs government-funded work with an increasing requirement for open access. It can also provide opportunities for society to provide feedback on AI systems and participate in the design of algorithms.”
