Mar 1, 2019
Lopata Hall, Room 101
Trustworthy Machine Learning from Untrusted Models
Many of today's machine learning (ML) systems are not built from scratch, but are "composed" from an array of primitive models. This paradigm shift has significantly simplified system development cycles. Yet, as most primitive models are contributed by untrusted third parties, their lack of regulation, standardization, and verification entails profound security implications. In this talk, I will demonstrate that malicious primitive models pose immense threats to the security of ML systems. I will present a general class of backdoor attacks wherein malicious models, once integrated into ML systems, are able to fully control the behaviors of their host systems. I will then describe two effective countermeasures against such threats: offline model checking, which verifies whether a third-party model is backdoor-free, and runtime system auditing, which detects and repairs abnormal system behaviors. Finally, I will discuss the challenges of realizing the ultimate vision of "lifelong security," which enforces security assurance throughout the lifecycles of ML systems. Through this talk, I hope to raise awareness of ML security issues and promote more principled practices for building and operating ML systems.
Prof. Ting Wang is currently an Assistant Professor in the Computer Science and Engineering Department at Lehigh University. Prior to joining Lehigh, he obtained his doctoral degree from the Georgia Institute of Technology. Prof. Wang conducts research at the intersection of machine learning, privacy, and security. His ongoing work focuses on making machine learning-based systems more usable in practice by mitigating security vulnerabilities, enhancing privacy awareness, and increasing decision-making transparency. Prof. Wang is a recipient of the NSF CAREER Award and the IBM Research Innovation Award. His work has been recognized with multiple best paper awards from venues including IEEE CNS and ACM AISec.