The human brain is a remarkable inspiration for computer science and artificial intelligence. A systems architect in the McKelvey School of Engineering at Washington University in St. Louis seeks to make information processing in future computing systems and mobile devices more efficient — both in speed and energy — by modeling it after the brain's neural network.
Xuan "Silvia" Zhang, assistant professor of electrical & systems engineering, plans to improve computer performance while saving energy with a five-year, $500,000 CAREER Award from the National Science Foundation. The awards support junior faculty who model the role of teacher-scholar through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization. One-third of current McKelvey Engineering faculty have received the award.
Computers take in analog signals, or continuous waves, and convert them to digital signals: binary numbers made up of zeros and ones.
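That conversion step can be pictured with an idealized analog-to-digital converter. The 8-bit resolution and 0–1 volt input range below are illustrative assumptions, not details from the article:

```python
def quantize(voltage, v_min=0.0, v_max=1.0, bits=8):
    """Map an analog voltage to one of 2**bits integer codes (an ideal ADC)."""
    levels = 2 ** bits
    # Clamp the input to the converter's range.
    voltage = max(v_min, min(v_max, voltage))
    # Scale into [0, levels - 1] and round to the nearest code.
    return round((voltage - v_min) / (v_max - v_min) * (levels - 1))

# Every analog sample becomes a string of zeros and ones.
sample = 0.42                 # an analog reading, in volts
code = quantize(sample)       # integer code: 107
bits = format(code, "08b")    # its binary form: "01101011"
```

Each sample turns into eight stored bits, which is the overhead Zhang describes: at high sample rates, saving and moving those bits dominates the energy cost.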
"A lot of the signals the computer gets from sensors are analog in nature, so to perform digital computation, you have to do this conversion step," she said. "It wastes a lot of energy and also results in a huge number of sometimes redundant digital bits. As it turns out, saving and moving these digital bits around when they become very large is the most inefficient thing in the computer."
Zhang, who studies computer architecture and integrated circuit design and automation, seeks to apply mathematical formulation from the brain's neural network to the design of computer chips using a technique called neural approximation.
"We try to approximate the function using hardware that can function as a neuron, and by doing that, it opens up a very promising direction," Zhang said. "I plan to apply this neural approximation to save the unnecessary conversion step and to keep from moving the digital bits as much. By doing this, you can make the computer consume less power and run faster."
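Zhang's neural approximation is a hardware technique, but the underlying idea, composing a function out of simple neuron-like units, can be sketched in software. The ReLU activation, knot positions, and target function f(x) = x² below are illustrative assumptions, not her actual design:

```python
def relu(z):
    """A simple 'neuron' activation: output the input if positive, else zero."""
    return max(0.0, z)

def neural_approx_square(x, knots=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Approximate f(x) = x**2 on [0, 1] with one ReLU neuron per knot.

    Each neuron's weight is the change in slope of the piecewise-linear
    fit at its knot, so summing the neurons reproduces the fit exactly.
    """
    f = lambda t: t * t
    y = f(knots[0])
    prev_slope = 0.0
    for k0, k1 in zip(knots[:-1], knots[1:]):
        slope = (f(k1) - f(k0)) / (k1 - k0)
        y += (slope - prev_slope) * relu(x - k0)
        prev_slope = slope
    return y
```

With just four neurons the approximation error on [0, 1] stays below about 0.016; adding knots (more neurons) shrinks it further. The appeal Zhang describes is that a physical circuit behaving like these neurons could evaluate the function directly on analog inputs, with no digital conversion in the loop.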
This method has several potential applications, Zhang said. In computers, memory is stored in a separate place from where it is processed. Inspired by the brain, Zhang plans to co-locate the memory and its processing location on the computer chip, an emerging research area known as in-memory computing.
"When you read from memory, it's an analog signal, then you compute based on that signal," she said. "I can apply my method to this because my method doesn't require everything to be converted to digital.
"I envision that in-memory computing can address a lot of today's important applications, like big-data analytics, machine learning and artificial intelligence, because all of these algorithms are data-intensive computational tasks, which means that you move these data in the memory around, and that costs the most in efficiency."
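The article does not specify Zhang's hardware, but one common embodiment of in-memory computing in the research literature is a resistive crossbar: weights are stored as conductances, and applying input voltages produces column currents that sum by Ohm's and Kirchhoff's laws, so a matrix-vector product happens inside the memory array itself. A software sketch of that idea, with made-up conductance and voltage values:

```python
def crossbar_mvm(conductances, voltages):
    """Model a resistive crossbar: each column current is the sum over
    rows of G[row][col] * V[row], i.e., an analog dot product."""
    n_cols = len(conductances[0])
    return [
        sum(conductances[r][c] * voltages[r] for r in range(len(voltages)))
        for c in range(n_cols)
    ]

G = [[1.0, 0.5],
     [0.2, 0.3],
     [0.4, 0.1]]          # 3 input rows x 2 output columns, in siemens
V = [0.1, 0.2, 0.3]       # input voltages applied to the rows
I = crossbar_mvm(G, V)    # column currents: the products, computed in place
```

The data-intensive kernels Zhang mentions (big-data analytics, machine learning) are dominated by exactly this operation, which is why computing it where the data lives avoids the costly movement of bits.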
In addition, the framework could help with near-sensor processing, such as the types of sensors used in self-driving cars that generate a lot of data every second.
"It is challenging to build efficient and high-performance information processing for this type of application because there is so much data generated from these sensors," Zhang said. "My idea is to bring the processing capability closer to the sensor to perform the computation near the sensor. And this naturally fits within the framework I'm proposing because the sensor generates an analog signal that doesn't have to be converted to digital bits."
Zhang also plans to build an automated design flow for computers based on this novel neural-network-inspired approach to information processing.
"If this method works, then it can really change and reshape the landscape of the industry because it's an approach that allows you to co-design the hardware and software," Zhang said.
As part of the work, Zhang plans to integrate this framework into the curriculum for the courses she teaches as well as in her lab, where her team builds simulation and experimental platforms for miniature autonomous robotic cars. She also plans to develop an educational kit for K-12 teachers.