
Michael Gleicher

Professor
Department of Computer Sciences
University of Wisconsin, Madison
1210 West Dayton St.
Madison, WI 53706
gleicher@cs.wisc.edu

Office: 6385 Computer Sciences

Office Hour: None this year - I am on sabbatical.

I am on sabbatical for the 2023-2024 academic year. I am generally around Madison, but am avoiding standard academic duties to focus on some projects.

I am a professor working in areas related to Visual Computing. My research these days is mainly about robotics and data visualization. With both, I am interested in how we can make them useful for people. I remain interested in animation, virtual reality, multimedia, …

A brief biography will tell you how I got here. You can see a reasonably current CV, but you probably are looking for papers, talks, videos or advice.

Teaching: None this year - I am on sabbatical. Last year I taught CS765 Data Visualization (Fall 22) and CS559 Computer Graphics (Spring 23).

I have some pages with various Advice I generally give to students. This includes the format for status reports, what I’d like to see in Prelims and Theses, my grad school FAQ, and my advice on how to give a talk.

You might be interested in my grad school FAQ. Come and talk to me if you’re interested in data visualization, robotics, computer graphics or related topics. If you are an undergrad and looking to work on a project, please see Undergrad Research, Projects and Directed Studies. If you are asking about a reference letter, please see Reference Letters for Students in Classes.

If you’re interested in joining our group, come talk to me! If you aren’t a student at Wisconsin yet, please look at my grad school FAQ, particularly the last few questions.

Current Research Themes

The projects list is more than slightly out of date; I need to revitalize it. But there are several things going on with robotics (tele-operation, providing awareness to remote users, using novel sensors, …) and visualization (summarization, text collection exploration, uncertainty, …).

Shared Autonomy for Robotic Inspection: We are developing robot solutions to automate inspection, where a mobile robot with a set of sensors scans through a space. Such applications involve a human collaborator to specify the task, supervise operation, assess the results, or guide the robot through challenging aspects. We seek to develop systems that share control: automating as much as possible, but allowing for user contributions as necessary. This application project is driving technical work in motion synthesis, mobile sensing, and user experience.
Visualization Theory: Summarization, Uncertainty, ...: We are exploring very basic questions in how to present information with visualizations. We are examining the central concept of summarization to understand how people use summaries and what strategies can be used to create them. This leads to the broader question of how people use visualizations to ask and answer questions. We are trying to codify the process for creating effective visualizations to make it easier for designers.
Awareness of (and with) Robots: We are interested in how we can help human stakeholders (operators, observers, etc.) have an appropriate understanding of robots and their situations. This requires us to design methods (such as visualizations) that help communicate robot state, environment, plans, and history to users. One aspect we explore is using robots to provide viewpoints (move cameras) to help observe robots (or other aspects of the environment).
Novel Sensors for Robotics Applications: We are exploring how we can use emerging sensors in robotics applications. New sensors offer different tradeoffs and capabilities, which provides opportunities for new robotics uses. For example, we are working with Single Photon Avalanche Diode (SPAD) time-of-flight sensors, which provide distance information in small, low-power packages. These sensors provide different information than more traditional ones: for example, statistical distributions over an area rather than detailed measurements.

Selected Past (but recent) Themes

Communicating Physical Interactions: We are working on ways for people and robots to communicate with each other about how objects should be manipulated in the world. Manipulations necessarily involve physical interactions (e.g., forces must be applied correctly). We are exploring ways for people to tell robots how to act with appropriate forces (e.g., to teach manipulation skills), as well as for robots to communicate back to people about the actions they are performing.
Communicative Robot Motions: If robots are going to work around people, it will be important that people can interpret the robots' movements correctly. We are developing ways to make robots move such that people will interpret them correctly. For example, we are considering how to design robot control algorithms such that the resulting movements are understandable, predictable, aesthetically pleasing, and convey a sense of appropriate affect (e.g., confidence).
Interacting with Machine Learning: People interact with machine learning systems in many ways: they must build them, debug them, diagnose them, decide to trust them, gain insights on their data from them, etc. We are exploring this in both directions: How do we build machine learning tools into interactive data analysis in order to help people interpret large and complex data? How do we build interaction tools that can help people construct and diagnose machine learning models?
Visualizing Comparisons for Data Science: Data interpretation tasks often involve making comparisons among the data, or can be thought of as comparisons. We are developing better visualization tools for performing comparisons for various data challenges, as well as better methods for inventing new designs.

Perceptual Principles for Visualization: Understanding how people see can inform how we should design visualizations. We have been exploring how recent results in perception (e.g., ensemble encoding) can be exploited to create novel designs, and how principles of perception can inform visualization design.
Video, Animation and Image Authoring: Our goal is to make it easier for people to create usable images and video. For example, we have developed methods for improving pictures and video as a post-process (e.g., removing shadows and stabilizing video). We have also worked on adapting imagery for use in new settings (e.g., image and video retargeting or automatic video editing) and making use of large image collections (e.g., interestingness detection or panorama finding).

Teaching

The main classes I teach are CS765 Data Visualization and CS559 Computer Graphics. You can see more on the Graphics Group Courses Page.

Older classes that might not get taught again for a while:

  • CS777: Computer Animation is a graduate-level CS class for people with some graphics background. It was taught regularly in the past (2013, 2011, 2006, 2004, 2003), but it kind of died off from lack of interest (student interest and my interest).
  • CS679 Computer Games Technologies: this class was popular, so I tried to teach it regularly for several years (2012, 2011, 2010).
  • Advanced Graphics: In the Spring of 2009, I taught an Advanced Graphics class.

You can find other information on graphics group classes on the Graphics Group Courses Page.

Selected Recent Publications

A (pretty) complete list is available here. Here are some selected recent ones:

  • RAL ‘22 (IROS ‘22): Geometric Calibration of Single-Pixel Distance Sensors w/ Sifferman and Gupta
  • RSS ‘22: Proxima: An Approach for Time or Accuracy Budgeted Collision Proximity Queries w/ Rakita and Mutlu
  • TVCG ‘22: embComp: Visual Interactive Comparison of Vector Embeddings w/ Heimerl et al.
  • Haptics ‘22: Assessing the Perceived Realism of Kinesthetic Haptic Renderings Under Parameter Variations w/ Zhang et al.
  • Visual Informatics ‘22: Trinary tools for continuously valued binary classifiers w/ Yu and Chen