A team of U.S. computer scientists is receiving a $10 million grant from the National Science Foundation (NSF) to make machine learning more trustworthy.
The grant establishes the Center for Trustworthy Machine Learning (CTML), a consortium of seven universities, including the University of Wisconsin-Madison. Researchers will work together toward two challenging goals: understanding the risks inherent to machine learning, and developing the tools, metrics, and methods to manage and mitigate those risks.
The science and arsenal of defensive techniques emerging within the center will provide the basis for building more trustworthy and secure systems in the future, as well as foster a long-term research community within this essential domain of technology, researchers said.
"Machine learning (ML) is being used in every critical sector of our society, such as health, finance, and power," says Somesh Jha, Professor of Computer Sciences at UW-Madison and one of the investigators on the grant. "Trustworthiness is of paramount importance in these sectors. CTML will produce techniques that address this important issue and enable the use of ML in a trustworthy and secure manner."
The award is part of NSF's Secure and Trustworthy Cyberspace (SaTC) program, which includes a $78.2 million portfolio of more than 225 new projects in 32 states spanning a broad range of research and education topics, including artificial intelligence, cryptography, network security, privacy, and usability. A new center-scale Frontier award headlines this portfolio by addressing grand challenges in cybersecurity science and engineering with the potential for broad economic and societal impacts.
"This Frontier project will develop an understanding of vulnerabilities in today's machine learning approaches, along with methods for mitigating against these vulnerabilities to strengthen future machine learning-based technologies and solutions," said Jim Kurose, NSF's assistant director for Computer and Information Science and Engineering.
Researchers will pursue three goals. First, they will explore methods to defend a trained model against adversarial inputs, emphasizing measurements of how robust defenses are and an understanding of the limits and costs of attacks. Second, they will develop new training methods that are immune to manipulation. Finally, they will investigate the broader security of sophisticated machine learning algorithms, including potential abuses of machine learning models, such as models that generate fake content, and will aim to develop mechanisms that prevent the theft of machine learning models.
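To make the first goal concrete, the sketch below (my own illustration, not code from the center) shows one classic way an adversarial input is crafted: the fast gradient sign method, which nudges an input in the direction of the loss gradient's sign. The toy logistic-regression model, its weights, and the `fgsm` helper are all assumptions for demonstration only.

```python
import numpy as np

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: perturb x by eps in the direction that
    increases the loss for the true label y. For logistic regression the
    input gradient of the cross-entropy loss is (p - y) * w."""
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# A tiny hand-set "trained" classifier and a clean input it classifies confidently.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm(w, b, x, y=1.0, eps=0.6)

print(predict(w, b, x))      # ≈ 0.82: confident in class 1
print(predict(w, b, x_adv))  # ≈ 0.43: the small perturbation flips the decision
```

A perturbation of 0.6 per feature is enough to push the model's confidence below 0.5, even though the model itself was never touched; defenses studied under the first goal aim to measure and shrink exactly this kind of fragility.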
Jha is an expert in information security, privacy, and formal methods. Recently he has embarked on research on adversarial ML, which focuses on issues arising in ML when an adversary is present. His expertise in formal methods will also be critical in the context of CTML.
"Machine learning is fundamentally changing the way we live and work—from autonomous vehicles, digital assistants, to robotic manufacturing—we see computers doing complex reasoning in ways that would be considered science fiction just a decade ago," said Patrick McDaniel, lead principal investigator and William L. Weiss Professor of Information and Communications Technology in the School of Electrical Engineering and Computer Science at Penn State University. "What we have found is that the algorithms and processing driving this new technology are vulnerable to attack. We have a unique opportunity at this time, before machine learning is widely deployed in critical systems, to develop the theory and practice needed for robust learning algorithms that provide rigorous and meaningful guarantees."
Computer scientists taking part in the grant are already involved in a summer school program centered on trustworthy machine learning and aimed at under-represented groups. They also are holding a series of webinars on the topic for high school students.
The grant will be led by researchers at Pennsylvania State University and, in addition to UW-Madison, includes researchers at UC San Diego, University of Virginia, Stanford University, and UC Berkeley.
Jha is the Lubar Chaired Professor in the Department of Computer Sciences at UW-Madison.
Contact: Somesh Jha, 608-262-9529, email@example.com