Can weak human supervision train superhuman models?

Chien-Ju Ho won a grant from OpenAI to study weak human supervision in training powerful machine learning models

Shawn Ballard 
Using chess as an experimental domain, Chien-Ju Ho will explore how incomplete or imperfect human-labeled data can be used to train superhuman machine learning models. (Credit: Rodrigo Perez on Unsplash)

When it comes to machine learning, human supervision is often the most expensive and time-consuming part of developing powerful models, such as the large language models (LLMs) many people now use. Before ChatGPT could have a conversation with you or help plan your next vacation, high-quality annotated datasets and informed feedback from real humans were required to train the model. 

This type of human input can be considered strong supervision: the humans providing it are more capable than the models being trained. However, as machine learning systems grow more powerful and begin to surpass human abilities, human supervision may become weak by comparison.

Chien-Ju Ho, assistant professor of computer science & engineering in the McKelvey School of Engineering at Washington University in St. Louis, received a $133,000 award from OpenAI to explore how and when weak human supervision might be used to improve strong machine learning models. 

Weak human supervision refers to data or demonstrations provided by humans who are less capable than the model they are helping to train and who may perform suboptimally. While this type of supervision is more error-prone, it can also offer broader, more diverse data coverage, which can improve the generalizability of machine learning models.

As an analogy, lower-rated chess players may learn more from players who are slightly better than from world-class players, since the positions that arise in world-class games rarely resemble the ones they encounter in their own. Ho's project investigates whether weak human supervision can improve the training of machine learning models in a similar way.

“Using chess as an experimental domain, our preliminary results suggest that in the small-data regime, it is more effective to use data from low-skill players than to use data from high-skill players,” Ho said. “This type of low-skill data is abundant, diverse and easily accessible online. Together with the availability of machine learning models of varying strengths in the chess domain, we will examine the conditions under which weak supervision can be effective. Our goal is to develop algorithms that can leverage a diverse set of human supervisions to improve the training of powerful models.”
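To make the coverage intuition concrete, here is a minimal, self-contained sketch in plain NumPy. It is not Ho's method or data; it is a toy experiment with a hypothetical setup in which a "weak" teacher labels noisily but across the whole input space, while a "strong" teacher labels precisely but only over a narrow slice of it, and a simple student model is scored on a broad test distribution.

```python
# Toy illustration (not from Ho's study): noisy-but-diverse ("weak")
# labels vs. accurate-but-narrow ("strong") labels. All parameters
# and function names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_value(x):
    # Ground-truth target the student is trying to learn.
    return np.sin(3 * x)

def sample_teacher(n, coverage, noise):
    # A teacher labels n inputs drawn from [-coverage, coverage],
    # with Gaussian labeling error of scale `noise`.
    x = rng.uniform(-coverage, coverage, size=n)
    y = true_value(x) + rng.normal(0.0, noise, size=n)
    return x, y

def student_mse(x_train, y_train, x_test):
    # 1-nearest-neighbor student: predict the label of the closest
    # training input. Crude, but it exposes the coverage effect.
    idx = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
    pred = y_train[idx]
    return np.mean((pred - true_value(x_test)) ** 2)

# Broad test distribution: the student is evaluated everywhere.
x_test = rng.uniform(-1.0, 1.0, size=2000)

for n in (10, 50, 500):
    # "Strong" teacher: precise labels, narrow slice of inputs.
    xs, ys = sample_teacher(n, coverage=0.3, noise=0.01)
    # "Weak" teacher: noisy labels, full input space.
    xw, yw = sample_teacher(n, coverage=1.0, noise=0.3)
    print(f"n={n:4d}  strong-teacher MSE={student_mse(xs, ys, x_test):.3f}  "
          f"weak-teacher MSE={student_mse(xw, yw, x_test):.3f}")
```

Running the sketch shows the weak teacher's broader coverage outweighing its label noise on the broad test set, which is the diversity benefit the analogy above describes; it does not capture the full conditions Ho's project aims to characterize.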
