Consistency, trustworthiness in large language models goal of new research

Chenguang Wang plans to improve grounding ability with Google Research Scholar award

Chenguang Wang plans to work with Google DeepMind researchers to improve the grounding ability of large language models with Google Research Scholar Award. (Image created by OpenAI DALL·E)

Large language models (LLMs), such as ChatGPT, are becoming part of our daily lives. These artificial intelligence models are trained on vast amounts of data to generate text, answer questions and prompt ideas.

However, one of the fundamental issues with these models is that they lack grounding, the ability to link their knowledge to real-world facts. Without grounding, LLMs can produce inconsistent and unsafe responses, such as misinformation and hallucinations. The recent White House executive order and open letters signed by thousands of researchers have highlighted the importance of grounding, especially in critical domains such as health care and science.

Chenguang Wang, an assistant professor of computer science & engineering at Washington University in St. Louis, plans to work with Google DeepMind researchers to improve the grounding ability of these models through a Google Research Scholar Award for his project titled “Before Grounding Responses: Learning to Ground References of Large Language Models.” The award provides a $60,000 gift and supports early-career professors who are pursuing research in fields core to Google.

The goal of this research is to make LLMs more consistent, safe and trustworthy by improving their grounding ability, which may lead to broader use across different areas. Wang plans to develop a new paradigm that grounds LLMs in reference or source documents, focusing on the references the models use to answer questions. The proposed framework consists of a series of new learning techniques that help models use the correct reference when responding to user queries in different scenarios.

Wang’s lab has been working to improve the grounding ability of LLMs. His prior work, DeepStruct, is a 10-billion-parameter LLM that outperformed the 175-billion-parameter GPT-3 on 21 grounding-oriented factual datasets.


The McKelvey School of Engineering at Washington University in St. Louis promotes independent inquiry and education with an emphasis on scientific excellence, innovation and collaboration without boundaries. McKelvey Engineering has top-ranked research and graduate programs across departments, particularly in biomedical engineering, environmental engineering and computing, and has one of the most selective undergraduate programs in the country. With 165 full-time faculty, 1,420 undergraduate students, 1,614 graduate students and 21,000 living alumni, we are working to solve some of society’s greatest challenges; to prepare students to become leaders and innovate throughout their careers; and to be a catalyst of economic development for the St. Louis region and beyond.
