Discovering Structure in Robotics Tasks via Demonstrations and Active Learning

Monday, February 16, 2015, 4:00pm to 5:00pm
Computer Sciences 1240

Speaker: Scott Niekum

Institution: Carnegie Mellon University

Future co-robots in the home and workplace will require the ability to quickly characterize new tasks and environments without the intervention of expert engineers. Human demonstrations and active learning can play complementary roles when learning complex, multi-step tasks in novel environments—demonstrations are a fast, natural way to broadly provide human insight into task structure and environmental dynamics, while active learning can fine-tune models by exploiting the robot’s knowledge of its own internal representations and uncertainties.

Using these complementary data sources, I will focus on three types of structure discovery that can help robots quickly produce robust control strategies for novel tasks: 1) learning high-level task descriptions from unstructured demonstrations, 2) inferring physics-based models of task goals and environmental dynamics, and 3) interactive perception for refinement of physics-based models. These techniques draw from Bayesian nonparametrics, time series analysis, filtering, and control theory to characterize complex tasks like IKEA furniture assembly that challenge the state of the art in manipulation.


Scott Niekum is a postdoctoral fellow at the Carnegie Mellon Robotics Institute, working with Chris Atkeson. He received his Ph.D. in Computer Science from the University of Massachusetts Amherst in 2013 under the supervision of Andrew Barto, and his B.S. from Carnegie Mellon University in 2005. His research interests include learning from demonstration, robotic manipulation, time-series analysis, and reinforcement learning.