While debugging is part and parcel of software development, it has remained an afterthought in the world of machine learning, says Jerry Zhu, who holds the Sheldon B. and Marianne S. Lubar Professorship in Computer Sciences.
Zhu and collaborators in the University of Wisconsin-Madison Computer Sciences Department are working to change that, choosing training set debugging as their first problem to tackle.
New research by graduate student Xuezhou Zhang and professors Zhu and Steve Wright was featured at the annual conference of the Association for the Advancement of Artificial Intelligence, held in New Orleans in February 2018. Zhang presented the team’s paper, “Training Set Debugging Using Trusted Items.”
In simple terms, machine learning is the branch of computer science that allows computers to learn without being explicitly programmed. Information known as training data provides sample input from which the computer learns to make future predictions or decisions.
A common example is classifying photos of animals as either dogs or cats; based on sufficient training data of reasonable quality, a computer program should be able to distinguish the two with a good degree of accuracy.
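The cat-versus-dog example can be sketched with a toy classifier. The nearest-centroid rule below and the two features (weight in kg, snout length in cm) are invented here purely for illustration; they are not from the paper.

```python
# Hypothetical sketch of learning from labeled training data: a
# nearest-centroid classifier on two made-up features
# (weight in kg, snout length in cm).

def centroid(points):
    """Average the feature vectors of one class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    """Predict the label whose class centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

# Training data: (weight, snout length) labeled "cat" or "dog".
train = {
    "cat": [(3.5, 2.0), (4.0, 2.5), (4.5, 2.2)],
    "dog": [(20.0, 8.0), (25.0, 9.0), (30.0, 10.0)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(classify((4.2, 2.1), centroids))   # a cat-like animal -> "cat"
print(classify((22.0, 8.5), centroids))  # a dog-like animal -> "dog"
```

Given enough clean examples, even this simple rule separates the two classes; the trouble the team studies begins when some of those training labels are wrong.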
Yet there are times when a machine-learning model does not give the expected results, and its user suspects there are bugs in the training data. This is where the team’s idea of training set debugging comes in.
“It’s usually very difficult to figure out what happened,” says Zhu. “We want to bring some accountability into the machine-learning pipeline, and therefore, we hope, make machine learning models more trustworthy.”
The team’s approach requires an additional data set of “trusted items” in the process. Says Zhu, “These are items that some expert has spent a lot of effort studying to make sure that the labels are correct. As you can imagine, in general, it’s very expensive to get such high quality data.”
Then, says Zhu, after training a machine-learning model using a potentially buggy (partially mislabeled) training set, that model can be tested on the set of trusted items. “If your original training set does contain bugs, it’s likely your learned model is also problematic. When it’s applied to the trusted items, it will predict labels different than the true, trusted labels.”
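This bug-detection step can be sketched on toy data. The 1-D threshold “classifier” and all the numbers below are invented for illustration, not taken from the paper; the point is only that a model fit to mislabeled data disagrees with the trusted labels.

```python
# Hypothetical sketch: train on a possibly buggy (partially mislabeled)
# set, then test the learned model against a small trusted set.

def train_threshold(xs, ys):
    """Fit a 1-D threshold rule (predict 1 if x >= threshold) by picking
    the candidate threshold with the fewest training errors."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        err = sum((1 if x >= t else 0) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(t, x):
    return 1 if x >= t else 0

# Buggy training set: the true rule is "1 if x >= 5", but the items
# at x=3 and x=4 are mislabeled as 1.
train_x = [1, 2, 3, 4, 6, 7, 8, 9]
train_y = [0, 0, 1, 1, 1, 1, 1, 1]

# Small, expensive-to-produce trusted set with expert-verified labels.
trusted_x, trusted_y = [0, 4, 10], [0, 0, 1]

learned_t = train_threshold(train_x, train_y)
disagreements = sum(predict(learned_t, x) != y
                    for x, y in zip(trusted_x, trusted_y))
print("learned threshold:", learned_t)         # pulled low by the bugs
print("disagreements on trusted set:", disagreements)
```

Here the mislabeled items drag the learned threshold below the true one, so the model mislabels a trusted item — the signal that the training set deserves debugging.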
Of course, in the real world, stakes are much higher and more complex than classifying cute pictures of fluffy pets. Machine-learning systems like image classifiers have wide-ranging practical uses that will only grow in importance in the years ahead.
For example, in order for a self-driving car to take the proper action at a stop sign, it needs to recognize stop signs reliably—even when they don’t all look alike (think of signs that are faded, bent or tagged with graffiti).
Once the team realizes that a training data set has problems, it goes back to that set, tentatively flips a subset of labels (e.g., from cat to dog), uses the modified set to retrain the machine-learning model, and then applies that model to the trusted set again. “What you want, ideally, is the smallest subset you need to flip such that, when you retrain, your learned model agrees perfectly with the trusted set.”
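On the same invented toy data, the flip-and-retrain idea can be illustrated by brute force. The team’s actual method uses an optimization formulation; plain enumeration of flip sets, as below, only works on tiny examples and is not the paper’s algorithm.

```python
# Hypothetical brute-force sketch: find the smallest set of training
# labels to flip so the retrained model agrees perfectly with the
# trusted items. All data here is made up for illustration.
from itertools import combinations

def train_threshold(xs, ys):
    """Fit a 1-D threshold rule (predict 1 if x >= threshold) by picking
    the candidate threshold with the fewest training errors."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        err = sum((1 if x >= t else 0) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(t, x):
    return 1 if x >= t else 0

# Buggy training set: true rule is "1 if x >= 5"; x=3 and x=4 are mislabeled.
train_x = [1, 2, 3, 4, 6, 7, 8, 9]
train_y = [0, 0, 1, 1, 1, 1, 1, 1]
trusted_x, trusted_y = [0, 4, 10], [0, 0, 1]  # small expert-verified set

def agrees(t):
    """Does the retrained model match every trusted label?"""
    return all(predict(t, x) == y for x, y in zip(trusted_x, trusted_y))

best = None
for k in range(len(train_y) + 1):              # try smaller flip sets first
    for subset in combinations(range(len(train_y)), k):
        ys = list(train_y)
        for i in subset:
            ys[i] = 1 - ys[i]                  # tentatively flip these labels
        if agrees(train_threshold(train_x, ys)):
            best = subset                      # smallest set found
            break
    if best is not None:
        break

print("smallest flip set (indices):", best)
```

In this toy case the search recovers exactly the two mislabeled items, matching the goal Zhu describes: the smallest flip set after which the retrained model agrees perfectly with the trusted set.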
It might seem like training machine-learning models with very large sets of trusted data would be an easy, obvious solution, rather than the team’s more complex, optimization-based approach. Yet in real-world situations, that would be cost-prohibitive and time-consuming.
Machine-learning models are applied to many things that affect daily life, such as processing loan applications, working with medical data, and other areas.
Next up for the team, says second-year graduate student Zhang, is additional work to make their system even more efficient and easy to use.
The UW-Madison researchers’ work was funded in part by several awards from the National Science Foundation, including a grant for formal methods for program fairness, and a new grant from NSF’s TRIPODS program for transdisciplinary initiatives in the fundamentals of data science.